A survey for light field super-resolution

Mingyuan Zhao , Hao Sheng , Da Yang , Sizhe Wang , Ruixuan Cong , Zhenglong Cui , Rongshan Chen , Tun Wang , Shuai Wang , Yang Huang , Jiahao Shen

High-Confidence Computing ›› 2024, Vol. 4 ›› Issue (1) : 100206

Review Articles

Abstract

Compared with 2D imaging data, 4D light field (LF) data retains richer structural information about the scene, which can significantly improve a computer's perception capability in tasks such as depth estimation, semantic segmentation, and LF rendering. However, there is an inherent trade-off between spatial and angular resolution during LF image acquisition. To overcome this problem, researchers have increasingly focused on light field super-resolution (LFSR). Traditional solutions achieve LFSR with various optimization frameworks, such as Bayesian and Gaussian models. Deep learning-based methods are now more popular than these conventional methods because they deliver better performance and more robust generalization. In this paper, existing approaches are divided into two main branches, conventional methods and deep learning-based methods, and each branch is discussed for light field spatial super-resolution (LFSSR), light field angular super-resolution (LFASR), and light field spatial and angular super-resolution (LFSASR), respectively. Subsequently, this paper introduces the primary public datasets and analyzes the performance of prevalent approaches on them. Finally, we discuss potential innovations in LFSR to promote the progress of this research field.

Keywords

Light field super-resolution / Convolutional neural network / Transformer / Sub-aperture image / Epipolar-plane image
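As a concrete illustration of the terms above, the sketch below shows how sub-aperture images and epipolar-plane images are sliced from a 4D LF array under the common two-plane parameterization L(u, v, s, t). The array shapes and variable names are illustrative assumptions for this sketch, not taken from the paper.

```python
import numpy as np

# A 4D light field L(u, v, s, t): (u, v) index angular views,
# (s, t) index spatial pixels within each view.
# Assumed illustrative dimensions: a 5x5 grid of 32x48-pixel views.
U, V, S, T = 5, 5, 32, 48
lf = np.random.rand(U, V, S, T).astype(np.float32)

# Sub-aperture image (SAI): fix the angular coordinates, keep all spatial pixels.
sai = lf[2, 2]          # center view, shape (S, T)

# Epipolar-plane image (EPI): fix one angular and one spatial coordinate.
# Scene depth manifests as the slope of lines in this 2D slice.
epi = lf[2, :, 16, :]   # shape (V, T)

# LFSSR upsamples the (s, t) axes of every SAI; LFASR synthesizes new
# views along the (u, v) axes; LFSASR does both jointly.
print(sai.shape, epi.shape)
```

Running this prints `(32, 48) (5, 48)`: the SAI is an ordinary 2D image, while the EPI mixes one angular and one spatial dimension, which is why EPI-based methods can recover geometry from line structure.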

Cite this article

Mingyuan Zhao, Hao Sheng, Da Yang, Sizhe Wang, Ruixuan Cong, Zhenglong Cui, Rongshan Chen, Tun Wang, Shuai Wang, Yang Huang, Jiahao Shen. A survey for light field super-resolution. High-Confidence Computing, 2024, 4(1): 100206 DOI:10.1016/j.hcc.2024.100206


Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This study is partially supported by the National Key R&D Program of China (2022YFC3803600), the National Natural Science Foundation of China (62372023), and the Open Fund of the State Key Laboratory of Software Development Environment, PR China (SKLSDE-2023ZX-11). We also thank the HAWKEYE Group for its support.

