Infrared and visible image fusion based on multi-level detail enhancement and generative adversarial network

Xiangrui Tian , Xiaohan Xianyu , Zhimin Li , Tong Xu , Yinjun Jia

Intelligence & Robotics ›› 2024, Vol. 4 ›› Issue (4): 524-43. DOI: 10.20517/ir.2024.30

Research Article


Abstract

Infrared and visible image fusion technology has wide applications in fields such as target detection and tracking. Existing image fusion methods often overlook the scale-hierarchical structure of features, leaving local and global features loosely connected. Improvements typically focus on the network architecture and loss functions, while the close relationship between the quality of the source images and the feature extraction network is neglected. These issues lead to artifacts and blurring in fused images, and detailed edge information is not well preserved. Therefore, this paper proposes an infrared and visible image fusion method based on a generative adversarial network (GAN) with multi-level detail enhancement. Firstly, the edge information of the input source images is enriched by the multi-level detail enhancement method, which improves image quality and makes the images more conducive to learning by the feature extraction network. Secondly, residual-dense and multi-scale modules are designed in the generator, establishing the connection between local and global features to ensure the transmissibility and coherence of the feature information. Finally, the fused image is constrained by the designed loss function and dual discriminators, so that more structural and detail information is added through continuous adversarial training. Experimental results show that the fused images contain more detailed texture information and prominent thermal radiation targets. The method also outperforms other fusion methods on the average gradient (AG), spatial frequency (SF) and edge intensity (EI) metrics, exceeding the second-best results by 65.41%, 65.09% and 55.22%, respectively.
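The abstract does not give the exact formulation of the multi-level detail enhancement step, but one common way to realize the idea of "enriching edge information at several scales" is multi-scale unsharp masking: detail layers are extracted at successively coarser Gaussian scales and re-injected with weights. The sketch below is an illustrative assumption, not the paper's method; the `sigmas` and `weights` values are arbitrary choices.

```python
# Hedged sketch of multi-level detail enhancement via multi-scale unsharp
# masking. This is one plausible interpretation of the pre-processing step
# described in the abstract, not the authors' exact algorithm.
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_level_detail_enhance(img, sigmas=(1.0, 2.0, 4.0),
                               weights=(0.5, 0.35, 0.25)):
    """Enhance edges by adding back detail layers from several Gaussian scales.

    img: 2-D float array with values in [0, 1].
    sigmas, weights: per-level blur scales and detail gains (illustrative).
    """
    img = img.astype(np.float64)
    enhanced = img.copy()
    prev = img
    for sigma, w in zip(sigmas, weights):
        blurred = gaussian_filter(prev, sigma)  # coarser version of the image
        detail = prev - blurred                 # detail layer at this scale
        enhanced += w * detail                  # re-inject weighted detail
        prev = blurred                          # next level works on coarser image
    return np.clip(enhanced, 0.0, 1.0)
```

On a step edge, the re-injected detail layers steepen the local gradient, which is consistent with the abstract's goal of making edge information easier for the feature extraction network to learn.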

Keywords

Image fusion / multi-level detail enhancement / generative adversarial networks / deep feature extraction

Cite this article

Xiangrui Tian, Xiaohan Xianyu, Zhimin Li, Tong Xu, Yinjun Jia. Infrared and visible image fusion based on multi-level detail enhancement and generative adversarial network. Intelligence & Robotics, 2024, 4(4): 524-43. DOI: 10.20517/ir.2024.30
