Abstract
Low-light images often suffer from defects such as low visibility, low contrast, high noise, and severe color distortion compared with well-exposed images. If the dark regions of an image are enhanced directly, the amplified noise inevitably degrades the whole image. Moreover, according to the retina-and-cortex (retinex) theory of color vision, the reflectivity of different image regions may differ, which limits the performance of applying uniform enhancement operations to the entire image. We therefore design a Hierarchical Flow Learning (HFL) framework, which consists of a Hierarchical Image Network (HIN) and a normalized invertible Flow Learning Network (FLN). The HIN extracts hierarchical structural features from low-light images, while the FLN maps the distribution of normally exposed images to a Gaussian distribution conditioned on the learned hierarchical features of low-light images. At test time, the invertibility of the FLN allows enhanced low-light images to be inferred from the Gaussian latent space. Specifically, the HIN extracts as much image information as possible at three scales (local, regional, and global) using a Triple-branch Hierarchical Fusion Module (THFM) and a Dual-Dconv Cross Fusion Module (DCFM). The THFM aggregates regional and global features to improve the overall brightness and quality of low-light images by perceiving and extracting more structural information, whereas the DCFM exploits the properties of the activation function and local features to enhance images at the pixel level, reducing noise and improving contrast. The model is trained with a negative log-likelihood loss function. Qualitative and quantitative experimental results demonstrate that HFL handles many types of quality degradation in low-light images better than state-of-the-art solutions.
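The core training idea above (an invertible network mapping normally exposed images to a Gaussian, conditioned on low-light features, trained by negative log-likelihood) can be illustrated with a minimal sketch. This is not the paper's FLN: it is a toy conditional affine coupling layer in NumPy, with hypothetical names (`coupling_forward`, `nll`) and a random linear conditioner standing in for the learned hierarchical features.

```python
import numpy as np

rng = np.random.default_rng(0)

def coupling_forward(x, cond, w):
    """One conditional affine coupling step: the first half of x passes
    through unchanged; the second half is scaled and shifted by functions
    of (first half, conditioning features). The Jacobian is triangular,
    so log|det J| is just the sum of log-scales."""
    d = x.shape[-1] // 2
    x1, x2 = x[..., :d], x[..., d:]
    h = np.tanh(np.concatenate([x1, cond], axis=-1) @ w)  # toy conditioner
    s, t = h[..., :d], h[..., d:2 * d]
    z2 = x2 * np.exp(s) + t
    return np.concatenate([x1, z2], axis=-1), s.sum(axis=-1)

def coupling_inverse(z, cond, w):
    """Exact inverse: recompute (s, t) from the unchanged half and undo
    the affine transform."""
    d = z.shape[-1] // 2
    z1, z2 = z[..., :d], z[..., d:]
    h = np.tanh(np.concatenate([z1, cond], axis=-1) @ w)
    s, t = h[..., :d], h[..., d:2 * d]
    return np.concatenate([z1, (z2 - t) * np.exp(-s)], axis=-1)

def nll(x, cond, w):
    """Negative log-likelihood under a standard Gaussian prior on z:
    -log p(x) = -log N(z; 0, I) - log|det dz/dx|."""
    z, logdet = coupling_forward(x, cond, w)
    d = x.shape[-1]
    log_pz = -0.5 * (z ** 2).sum(axis=-1) - 0.5 * d * np.log(2 * np.pi)
    return -(log_pz + logdet).mean()

# Toy data: "normally exposed" samples x, conditioning features c.
x = rng.normal(size=(8, 4))
c = rng.normal(size=(8, 3))
w = rng.normal(size=(2 + 3, 4)) * 0.1  # (d + cond_dim) -> 2*d
z, _ = coupling_forward(x, c, w)
x_rec = coupling_inverse(z, c, w)
print(np.allclose(x, x_rec))  # prints True: the coupling is exactly invertible
```

The exact invertibility demonstrated at the end is what lets a flow model, once trained by minimizing `nll`, generate an enhanced image at test time by sampling a latent and running the network backwards.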
The HFL model enhances low-light images with better visibility, less noise, and improved contrast, making it suitable for practical scenarios such as autonomous driving, medical imaging, and nighttime surveillance. It outperforms state-of-the-art methods, achieving a PSNR of 27.26 dB, an SSIM of 0.93, and an LPIPS of 0.10 on the benchmark LOL-v1 dataset. The source code of HFL is available at https://github.com/Smile-QT/HFL-for-LIE.
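For reference, the PSNR figure quoted above is the standard peak signal-to-noise ratio between an enhanced image and its ground truth. A minimal computation for images scaled to [0, 1] can be sketched as follows (the `psnr` helper is illustrative, not code from the paper):

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy check: a uniform error of 0.1 gives MSE = 0.01, so PSNR = 20 dB.
a = np.zeros((4, 4))
b = np.full((4, 4), 0.1)
print(round(psnr(a, b), 2))  # prints 20.0
```

SSIM and LPIPS are structural and learned perceptual similarity measures, respectively; they require windowed statistics or a pretrained network and are typically computed with library implementations rather than a few lines of NumPy.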
Keywords
Low-light image enhancement / Flow learning / Hierarchical fusion / Cross fusion / Image processing
Cite this article
Xinlin Yuan, Yong Wang, Yan Li, Hongbo Kang, Yu Chen, Boran Yang.
Hierarchical flow learning for low-light image enhancement.
Digital Communications and Networks, 2025, 11(4): 1158-1172. DOI: 10.1016/j.dcan.2024.11.010