Light field depth estimation: A comprehensive survey from principles to future

Tun Wang, Hao Sheng, Rongshan Chen, Da Yang, Zhenglong Cui, Sizhe Wang, Ruixuan Cong, Mingyuan Zhao

High-Confidence Computing, 2024, Vol. 4, Issue 1: 100187
DOI: 10.1016/j.hcc.2023.100187

Review Article

Abstract

Light Field (LF) depth estimation is an important research direction in computer vision and computational photography that aims to infer the depth of objects in a three-dimensional scene from captured LF data. Given its growing significance, this article surveys the key concepts, methods, novel applications, and future trends in the area. We summarize LF depth estimation methods, which typically exploit the radiance recorded along rays from all directions in the LF data, and group them into epipolar-plane image, multi-view geometry, focal stack, and deep learning approaches. We analyze the challenges facing each of these approaches, including algorithmic complexity, heavy computation, and runtime requirements. In addition, this survey covers most of the currently available methods, conducts comparative experiments, discusses the results, and investigates novel directions in LF depth estimation.
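To make the radiance-consistency principle concrete: in a 4D LF, the rays leaving a single scene point share (nearly) the same radiance, so a point at disparity d traces a line of slope d across a 2D epipolar-plane image (EPI) slice. Below is a minimal sketch of depth-from-shearing built on this constraint, assuming an EPI slice stored as a NumPy array epi[u, x] (angular coordinate u, spatial coordinate x); the function name, candidate-disparity search, and variance cost are illustrative choices, not the algorithm of any particular surveyed method.

import numpy as np

def epi_disparity_by_shearing(epi, candidates):
    """Brute-force per-pixel disparity from one EPI slice epi[u, x].

    For each candidate disparity d, angular row u is resampled at
    x + d * (u - u_center), so a scene point at disparity d becomes a
    vertical line in the sheared EPI; the variance along u then measures
    how consistent the radiance is, and the lowest-variance candidate
    wins at each pixel.
    """
    U, X = epi.shape
    u_center = U // 2
    xs = np.arange(X, dtype=float)
    best_cost = np.full(X, np.inf)
    best_disp = np.zeros(X)
    for d in candidates:
        sheared = np.stack(
            [np.interp(xs + d * (u - u_center), xs, epi[u]) for u in range(U)]
        )
        cost = sheared.var(axis=0)  # low variance = consistent radiance
        improved = cost < best_cost
        best_disp[improved] = d
        best_cost[improved] = cost[improved]
    return best_disp

Practical methods replace this brute-force search with structure-tensor slope estimation on EPIs, cost-volume matching across sub-aperture images, focus measures over a focal stack, or learned networks, but all of them rest on the same radiance-consistency constraint.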

Keywords

Light field / Depth estimation / Deep learning / Sub-aperture image / Epipolar-plane image

Cite this article

Tun Wang, Hao Sheng, Rongshan Chen, Da Yang, Zhenglong Cui, Sizhe Wang, Ruixuan Cong, Mingyuan Zhao. Light field depth estimation: A comprehensive survey from principles to future. High-Confidence Computing, 2024, 4(1): 100187. DOI: 10.1016/j.hcc.2023.100187


Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This study was partially supported by the National Key R&D Program of China (2022YFC3803600), the National Natural Science Foundation of China (62372023), and the Open Fund of the State Key Laboratory of Software Development Environment, China (SKLSDE-2023ZX-11). We also thank the HAWKEYE Group for its support.

