Retinex based low-light image enhancement using guided filtering and variational framework

Shi Zhang, Gui-jin Tang, Xiao-hua Liu, Su-huai Luo, Da-dong Wang

Optoelectronics Letters, 2018, Vol. 14, Issue 2: 156-160. DOI: 10.1007/s11801-018-7208-9

Abstract

A new image enhancement algorithm based on Retinex theory is proposed to address the poor visual quality of images captured under low-light conditions. First, the image is converted from the RGB color space to the HSV color space to obtain the V channel. Next, two illumination estimates are computed on the V channel, one by guided filtering and one within a variational framework, and they are fused into a new illumination according to the average gradient. The new reflectance is then calculated from the V channel and the fused illumination. A new V channel, obtained by multiplying the new illumination and the new reflectance, is processed with contrast limited adaptive histogram equalization (CLAHE). Finally, the image in HSV space is converted back to RGB space to obtain the enhanced result. Experimental results show that the proposed method achieves better subjective and objective quality than existing methods.
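Since the abstract only outlines the pipeline, the Python sketch below (using OpenCV) illustrates the steps under stated assumptions: cv2.ximgproc.guidedFilter from opencv-contrib-python supplies the guided-filtering illumination, a large Gaussian blur stands in for the variational-framework estimate, the average-gradient fusion uses simple global weights, and a gamma correction of the fused illumination (common in Retinex pipelines such as the variational framework of Kimmel et al., but not specified in the abstract) is applied before recombination. None of the parameter values come from the paper.

```python
# Minimal sketch of the pipeline outlined in the abstract (not the authors'
# exact implementation). Requires opencv-contrib-python for cv2.ximgproc.
import cv2
import numpy as np

def average_gradient(img):
    """Mean gradient magnitude, used here as a simple detail measure for fusion."""
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
    return float(np.mean(np.sqrt(gx * gx + gy * gy)))

def enhance_low_light(bgr, gamma=2.2):
    # Step 1: RGB (BGR in OpenCV) -> HSV, work on the V channel only.
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    v = v.astype(np.float32) / 255.0

    # Step 2a: illumination estimate by edge-preserving guided filtering.
    illum_gf = cv2.ximgproc.guidedFilter(v, v, 16, 1e-2)
    # Step 2b: stand-in for the variational-framework estimate
    # (a large Gaussian blur is used here purely for illustration).
    illum_var = cv2.GaussianBlur(v, (0, 0), 15)

    # Step 2c: fuse the two estimates with weights proportional to their
    # average gradients (a simplified reading of the fusion rule).
    ag_gf, ag_var = average_gradient(illum_gf), average_gradient(illum_var)
    w = ag_gf / (ag_gf + ag_var + 1e-6)
    illum = np.clip(w * illum_gf + (1.0 - w) * illum_var, 1e-3, 1.0)

    # Step 3: reflectance from the V channel and the fused illumination.
    reflect = np.clip(v / illum, 0.0, 1.0)

    # Step 4: recombine into a new V channel. The gamma correction of the
    # illumination is an assumption (common Retinex practice), not a step
    # stated in the abstract.
    v_new = np.clip(np.power(illum, 1.0 / gamma) * reflect, 0.0, 1.0)
    v_new = (v_new * 255.0).astype(np.uint8)

    # Step 5: CLAHE on the new V channel.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    v_new = clahe.apply(v_new)

    # Step 6: HSV -> BGR to obtain the enhanced image.
    return cv2.cvtColor(cv2.merge([h, s, v_new]), cv2.COLOR_HSV2BGR)

# Example usage (hypothetical file names):
# img = cv2.imread("low_light.png")
# cv2.imwrite("enhanced.png", enhance_low_light(img))
```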

Cite this article

Shi Zhang, Gui-jin Tang, Xiao-hua Liu, Su-huai Luo, Da-dong Wang. Retinex based low-light image enhancement using guided filtering and variational framework. Optoelectronics Letters, 2018, 14(2): 156-160. https://doi.org/10.1007/s11801-018-7208-9


This work has been supported by the China Scholarship Council, the Postgraduate Research & Practice Innovation Program of Jiangsu Province (No. KYCX17_0776), and the Natural Science Foundation of NUPT (No. NY214039).
