Zamzam-Fusion for dual-gain with NLM-CDDFuse for CMOS sensors using ATEF-DRPI metric

IBRAHIM ISMAIL ATEF ISMAIL, Yuchun CHANG

Journal of Measurement Science and Instrumentation, 2025, 16(3): 395-405. DOI: 10.62756/jmsi.1674-8042.2025038

Signal and image processing technology · Research article


Abstract

This paper presents an enhanced version of the correlation-driven dual-branch feature decomposition framework (CDDFuse) for fusing low- and high-exposure images captured by the G400BSI sensor. We introduce a novel neural long-term memory (NLM) module into the CDDFuse architecture that improves feature extraction by maintaining persistent global feature representations across image sequences. The proposed method effectively preserves dynamic range and structural detail, and is evaluated with a new metric, the ATEF dynamic range preservation index (ATEF-DRPI). Experimental results on a G400BSI dataset demonstrate superior fusion quality, with an ATEF-DRPI score of 0.90, a 12.5% improvement over the baseline CDDFuse (0.80), indicating better detail retention in both bright and dark regions. This work advances image fusion under extreme lighting conditions and improves performance for downstream vision tasks.
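The full text sits behind the journal's access wall, so neither the NLM module nor the ATEF-DRPI metric is specified in this excerpt. As a minimal sketch of the general idea only, the PyTorch block below assumes the NLM module behaves like a memory-augmented attention layer: a bank of learnable memory slots that flattened encoder features read from, so that global representations can persist across an image sequence. The class name, sizes, and residual read are illustrative assumptions, not the authors' design.

```python
# Hypothetical neural long-term memory (NLM) sketch -- NOT the paper's
# implementation. It illustrates persistent, learnable memory slots that
# per-image feature tokens read from via attention.
import torch
import torch.nn as nn

class NLMSketch(nn.Module):
    def __init__(self, dim: int = 64, slots: int = 32, heads: int = 4):
        super().__init__()
        # Persistent global representations shared across all inputs.
        self.memory = nn.Parameter(torch.randn(slots, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, tokens, dim), e.g. flattened spatial features
        # from a dual-branch encoder such as CDDFuse's.
        mem = self.memory.unsqueeze(0).expand(feats.size(0), -1, -1)
        read, _ = self.attn(query=feats, key=mem, value=mem)
        return self.norm(feats + read)  # residual memory read

feats = torch.randn(2, 256, 64)  # 2 images, 16x16 feature map flattened
print(NLMSketch()(feats).shape)  # torch.Size([2, 256, 64])
```

Likewise, the abstract only reports ATEF-DRPI scores, not the formula. The NumPy sketch below shows one plausible shape for a dynamic range preservation index in [0, 1]: shadow detail in the fused image is judged against the high-exposure frame (which exposes shadows best) and highlight detail against the low-exposure frame. The quantile threshold, Laplacian detail proxy, and equal weighting are all assumptions.

```python
# Hypothetical dynamic-range preservation index -- NOT the paper's
# ATEF-DRPI definition, only an illustration of what such an index
# could measure.
import numpy as np
from scipy.ndimage import laplace

def local_detail(img):
    """Absolute Laplacian response as a simple local-detail proxy."""
    return np.abs(laplace(img.astype(np.float64)))

def drpi_sketch(low_exp, high_exp, fused, q=0.2):
    fused = fused.astype(np.float64)
    shadows = fused <= np.quantile(fused, q)           # darkest pixels
    highlights = fused >= np.quantile(fused, 1.0 - q)  # brightest pixels

    d_fused, d_high, d_low = map(local_detail, (fused, high_exp, low_exp))
    eps = 1e-8
    # Fraction of reference detail retained, clipped so over-sharpening
    # cannot push the score above 1.
    s_sh = np.clip(d_fused[shadows].mean() / (d_high[shadows].mean() + eps), 0, 1)
    s_hl = np.clip(d_fused[highlights].mean() / (d_low[highlights].mean() + eps), 0, 1)
    return 0.5 * (s_sh + s_hl)
```

Under a ratio-style index like this, the reported scores are consistent with the abstract's arithmetic: (0.90 - 0.80) / 0.80 = 12.5% relative improvement over the baseline.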

Keywords

image fusion / G400BSI sensor / dynamic range preservation / low- and high-exposure fusion / deep learning

Cite this article

IBRAHIM ISMAIL ATEF ISMAIL, Yuchun CHANG. Zamzam-Fusion for dual-gain with NLM-CDDFuse for CMOS sensors using ATEF-DRPI metric. Journal of Measurement Science and Instrumentation, 2025, 16(3): 395-405. DOI: 10.62756/jmsi.1674-8042.2025038


References

[1] ZHANG X. Multi-exposure image fusion: a survey of recent advances. IEEE Transactions on Image Processing, 2020: 4567-4582.
[2] ZHANG H, XU H, TIAN X. Deep learning for multi-modal image fusion: a review. Information Fusion, 2021, 12: 323-336.
[3] REINHARD E, WARD G, PATTANAIK S, et al. High dynamic range imaging: acquisition, display, and image-based lighting. San Francisco: Morgan Kaufmann, 2009.
[4] MERTENS T, KAUTZ J, VAN REETH F. Exposure fusion: a simple and practical alternative to high dynamic range photography. Computer Graphics Forum, 2009, 28(1): 161-171.
[5] GOSHTASBY A A. Fusion of multi-exposure images. Image and Vision Computing, 2005, 23(6): 611-618.
[6] BURT P, ADELSON E. The Laplacian pyramid as a compact image code. IEEE Transactions on Communications, 1983, 31(4): 532-540.
[7] PRABHAKAR K R, SRIKAR V S, BABU R V. DeepFuse: a deep unsupervised approach for exposure fusion with extreme exposure image pairs//2017 IEEE International Conference on Computer Vision, October 22-29, 2017, Venice, Italy. New York: IEEE, 2017: 4724-4732.
[8] ZHAO Z X, BAI H W, ZHANG J S, et al. CDDFuse: correlation-driven dual-branch feature decomposition for multi-modality image fusion//2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 17-24, 2023, Vancouver, BC, Canada. New York: IEEE, 2023: 5906-5916.
[9] IM C G, SON D M, KWON H J, et al. Multi-task learning approach using dynamic hyperparameter for multi-exposure fusion. Mathematics, 2023, 11(7): 1620.
[10] CHEN X, ZHANG Y, TANG J. Memory-augmented deep learning for multi-modal image fusion with sequential data. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(6): 8923-8935.
[11] WANG Z, BOVIK A C, SHEIKH H R, et al. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 2004, 13(4): 600-612.
[12] ZHANG H, HUANG X, XIAO T, et al. Dual-branch feature extraction and fusion network for multi-modal image fusion. IEEE Transactions on Circuits and Systems for Video Technology, 2023, 8: 3924-3936.
[13] GONZALEZ R C, WOODS R E. Digital image processing. New York: Pearson Education Inc., 2018.
[14] THANGAVEL K, PALANISAMY N, MUTHUSAMY S, et al. A novel method for image captioning using multimodal feature fusion employing mask RNN and LSTM models. Soft Computing, 2023, 27(19): 14205-14218.
[15] ZHANG Y, LIU Y, SUN P, et al. IFCNN: a general image fusion framework based on convolutional neural network. Information Fusion, 2020, 54: 99-118.
[16] XU H, XU X, XU G, et al. Rethinking the image fusion: a fast unified image fusion network based on proportional maintenance of gradient and intensity//The 34th AAAI Conference on Artificial Intelligence, February 7-12, 2020, Hilton New York Midtown, New York, USA. Palo Alto: AAAI Press, 2020: 12797-12804.
[17] XU H, MA J Y, JIANG J J, et al. U2Fusion: a unified unsupervised image fusion network. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(1): 502-518.
[18] LI H, WU X J, DURRANI T. NestFuse: an infrared and visible image fusion architecture based on nest connection and spatial/channel attention models. IEEE Transactions on Instrumentation and Measurement, 2020, 69(12): 9645-9656.
[19] LI H, WU X J, KITTLER J. RFN-Nest: an end-to-end residual fusion network for infrared and visible images. Information Fusion, 2021, 73: 72-86.
[20] TANG L F, YUAN J T, MA J Y. Image fusion in the loop of high-level vision tasks: a semantic-aware real-time infrared and visible image fusion network. Information Fusion, 2022, 82: 28-42.
[21] LIU J Y, FAN X, HUANG Z B, et al. Target-aware dual adversarial learning and a multi-scenario multi-modality benchmark to fuse infrared and visible for object detection//2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 18-24, 2022, New Orleans, LA, USA. New York: IEEE, 2022: 5792-5801.
[22] MA K D, DUANMU Z F, ZHU H W, et al. Deep guided learning for fast multi-exposure image fusion. IEEE Transactions on Image Processing, 2019, 29: 2808-2819.
