Depth image super-resolution algorithm based on structural features and non-local means

Wang Jing, Wei-Zhong Zhang, Bao-Xiang Huang, Huan Yang

Optoelectronics Letters, 2018, Vol. 14, Issue 5: 391-395. DOI: 10.1007/s11801-018-8039-4

Abstract

The resolution and quality of depth maps captured by depth cameras are limited by sensor hardware, which hinders further computer vision applications. To address this problem, we propose a new method that enhances low-resolution depth maps using high-resolution color images. A structural-aware term is introduced, exploiting the structural information available in color images and the assumption that color and depth images captured from the same scene share identical structural features within local neighborhoods. We integrate this structural-aware term with color similarity and depth similarity within local neighborhoods to design a local weighting filter based on structural features. To exploit the non-local self-similarity of images, the local weighting filter is combined with the concept of non-local means, yielding a non-local weighting filter based on structural features. Experimental results show that high-resolution depth images can be reconstructed well by applying the non-local filter followed by the local filter based on structural features, and that the proposed method reconstructs markedly better high-resolution depth images than previously reported methods.
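The following is a minimal, illustrative sketch in Python/NumPy of how a local weighting filter of this kind could combine color similarity, depth similarity, and a structural term; it is our reading of the abstract, not the authors' exact formulation. The gradient-based structural cue, the function name, and the parameters radius, sigma_color, sigma_depth, and sigma_struct are assumptions introduced for illustration.

import numpy as np

def structural_local_filter(depth_up, color, radius=3,
                            sigma_color=10.0, sigma_depth=5.0, sigma_struct=0.5):
    # depth_up: low-resolution depth map upsampled to the color-image grid (2-D float array)
    # color:    registered high-resolution color image (H x W x 3) or grayscale (H x W)
    gray = color.mean(axis=2) if color.ndim == 3 else color.astype(np.float64)
    gy, gx = np.gradient(gray)              # local gradients as a simple structural cue
    h, w = depth_up.shape
    out = np.zeros_like(depth_up, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            dc = gray[y0:y1, x0:x1] - gray[y, x]              # color (intensity) difference
            dd = depth_up[y0:y1, x0:x1] - depth_up[y, x]      # depth difference
            ds = np.hypot(gx[y0:y1, x0:x1] - gx[y, x],
                          gy[y0:y1, x0:x1] - gy[y, x])        # structural (gradient) difference
            # Gaussian weights on each cue, multiplied into a single local weight
            wgt = (np.exp(-dc ** 2 / (2 * sigma_color ** 2)) *
                   np.exp(-dd ** 2 / (2 * sigma_depth ** 2)) *
                   np.exp(-ds ** 2 / (2 * sigma_struct ** 2)))
            out[y, x] = (wgt * depth_up[y0:y1, x0:x1]).sum() / wgt.sum()
    return out

A non-local variant in the spirit of the paper would evaluate such weights over a larger search window using patch-wise distances, as in non-local means, rather than single-pixel differences.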

Cite this article

Wang Jing, Wei-Zhong Zhang, Bao-Xiang Huang, Huan Yang. Depth image super-resolution algorithm based on structural features and non-local means. Optoelectronics Letters, 2018, 14(5): 391-395. https://doi.org/10.1007/s11801-018-8039-4

This work has been supported by the National Natural Science Foundation of China (No.61602269), Shandong Province Science and Technology Development Project (No.2014GGX101048), China Postdoctoral Science Foundation (No.2015M571993), and Shandong Provincial Natural Science Foundation (No.ZR2017MD004).
