Colored 3D surface reconstruction using Kinect sensor

Lian-peng Guo , Xiang-ning Chen , Ying Chen , Bin Liu

Optoelectronics Letters ›› 2015, Vol. 11 ›› Issue (2): 153-156. DOI: 10.1007/s11801-015-5013-2


Abstract

A colored 3D surface reconstruction method that effectively fuses the depth and color images from a Microsoft Kinect is proposed and demonstrated experimentally. The Kinect depth images are processed with an improved joint-bilateral filter based on region segmentation, which efficiently combines the depth and color data to improve depth quality. The registered depth data are then integrated into a surface reconstruction through the colored truncated signed distance fields presented in this paper. Finally, an improved ray casting for rendering the fully colored surface is implemented to estimate the color texture of the reconstructed object. For captured depth and color images of a toy car, the improved region-segmentation-based joint-bilateral filter raises the peak signal-to-noise ratio (PSNR) of the depth images by approximately 4.57 dB, compared with 1.16 dB for the standard joint-bilateral filter. The colored reconstruction results for the toy car demonstrate the suitability and capability of the proposed method.
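As a rough illustration of the color-guided depth filtering described in the abstract, a minimal joint-bilateral filter is sketched below. This is a generic baseline, not the paper's method: the region-segmentation refinement that yields the reported 4.57 dB PSNR gain is omitted, and all parameter values (`radius`, `sigma_s`, `sigma_r`) are illustrative assumptions.

```python
import numpy as np

def joint_bilateral_filter(depth, color, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Smooth a depth map using the color image as the guidance signal.

    Each output pixel is a weighted average of neighbouring depth samples,
    where the weights combine a spatial Gaussian with a Gaussian on color
    similarity, so depth edges that coincide with color edges are preserved.
    Parameters are illustrative, not the paper's settings.
    """
    h, w = depth.shape
    out = np.zeros((h, w), dtype=np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2.0 * sigma_s ** 2))
    pad_d = np.pad(depth, radius, mode='edge')
    pad_c = np.pad(color, ((radius, radius), (radius, radius), (0, 0)),
                   mode='edge')
    for i in range(h):
        for j in range(w):
            d_win = pad_d[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            c_win = pad_c[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Color-similarity ("range") weights from the guidance image
            diff = c_win - color[i, j]
            range_w = np.exp(-np.sum(diff ** 2, axis=2) / (2.0 * sigma_r ** 2))
            wts = spatial * range_w
            out[i, j] = np.sum(wts * d_win) / np.sum(wts)
    return out
```

Because the range weights come from the color image rather than the noisy depth itself, holes and speckle in the Kinect depth map are smoothed without blurring across object boundaries visible in color.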
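The colored truncated signed distance field integration mentioned in the abstract can be pictured as a per-voxel running average over successive registered frames. The sketch below shows one such voxel update in the style of volumetric fusion (Curless-Levoy / KinectFusion); the function name, truncation distance, and weight cap are hypothetical choices, not values from this paper.

```python
import numpy as np

def integrate_voxel(tsdf, w, color, sdf, sample_rgb,
                    trunc=0.03, w_max=64.0):
    """Fuse one depth/color observation into a single voxel.

    tsdf, w, color : the voxel's current truncated signed distance,
                     accumulated weight, and RGB color.
    sdf            : signed distance from this frame's depth sample to the
                     voxel (positive in front of the surface), in metres.
    Values of trunc and w_max are illustrative assumptions.
    """
    if sdf < -trunc:
        # Voxel lies well behind the observed surface; leave it unchanged.
        return tsdf, w, color
    d = min(1.0, sdf / trunc)              # truncate to [-1, 1]
    new_w = min(w + 1.0, w_max)            # cap weight to stay adaptive
    new_tsdf = (tsdf * w + d) / (w + 1.0)  # weighted running average
    new_color = (color * w + sample_rgb) / (w + 1.0)
    return new_tsdf, new_w, new_color
```

Storing a running-average color alongside each signed distance is what lets the subsequent ray-casting step recover a textured surface: when a ray finds the tsdf zero crossing, the interpolated voxel colors give the surface texture directly.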

Keywords

Augmented Reality / Depth Image / Iterative Closest Point / Kinect Sensor / Region Segmentation

Cite this article

Lian-peng Guo, Xiang-ning Chen, Ying Chen, Bin Liu. Colored 3D surface reconstruction using Kinect sensor. Optoelectronics Letters, 2015, 11(2): 153-156 DOI:10.1007/s11801-015-5013-2


