Abstract
A decision map contains complete and clear information about the image to be fused and is crucial to many image fusion problems, especially multi-focus image fusion. However, obtaining an accurate decision map, which is necessary for a satisfactory fusion result, is usually difficult. In this letter, we address this problem with a convolutional neural network (CNN), aiming to produce a state-of-the-art decision map. The main idea is to replace the max-pooling layers of the CNN with convolution layers, propagate the residuals backwards by gradient descent, and update the trainable parameters of the individual CNN layers layer by layer. Based on this, we propose a new all-convolutional-network (ACNN)-based multi-focus image fusion method in the spatial domain. We demonstrate that the decision map obtained from the ACNN is reliable and leads to high-quality fusion results. Experimental results clearly validate that the proposed algorithm achieves state-of-the-art fusion performance in both qualitative and quantitative evaluations.
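To make the architectural idea in the abstract concrete, here is a minimal sketch of an all-convolutional focus classifier, assuming PyTorch and purely illustrative layer sizes (the letter does not specify them, and this is not the authors' code): every max-pooling layer is replaced by a strided convolution, so downsampling itself has learnable weights that are updated by backpropagation along with the rest of the network.

```python
# Minimal sketch (illustrative, not the paper's implementation) of the
# "all convolutional" idea: strided convolutions stand in for max-pooling.
import torch
import torch.nn as nn

class AllConvBlock(nn.Module):
    """Conv + ReLU followed by a stride-2 conv that replaces max-pooling."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.feature = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        # Strided convolution gives the same 2x spatial reduction as
        # nn.MaxPool2d(2), but with weights learned by gradient descent.
        self.downsample = nn.Conv2d(out_ch, out_ch, kernel_size=3,
                                    stride=2, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.act(self.feature(x))
        return self.act(self.downsample(x))

class FocusClassifier(nn.Module):
    """Tiny ACNN that scores a grayscale patch as focused vs. defocused;
    per-patch scores would then be assembled into the decision map."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            AllConvBlock(1, 16),   # hypothetical channel widths
            AllConvBlock(16, 32),
        )
        self.head = nn.Conv2d(32, 2, kernel_size=1)  # focused / blurred

    def forward(self, x):
        x = self.head(self.body(x))
        return x.mean(dim=(2, 3))  # global average pooling -> class scores

# Usage: a batch of 32x32 grayscale patches -> two class scores each.
patches = torch.randn(4, 1, 32, 32)
print(FocusClassifier()(patches).shape)  # torch.Size([4, 2])
```

In this sketch, sliding such a classifier over corresponding patches of the two source images and comparing the scores would yield the binary decision map used for spatial-domain fusion.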
Cite this article
Chao-ben Du, She-sheng Gao. Multi-focus image fusion with the all convolutional neural network. Optoelectronics Letters, 71-75. DOI: 10.1007/s11801-018-7207-x