CLC number: TP37
On-line Access: 2024-08-27
Received: 2023-10-17
Revision Accepted: 2024-05-08
Crosschecked: 2020-06-10
Rui Guo, Xuan-jing Shen, Xiao-yu Dong, Xiao-li Zhang. Multi-focus image fusion based on fully convolutional networks[J]. Frontiers of Information Technology & Electronic Engineering, 2020, 21(7): 1019-1033.
@article{title="Multi-focus image fusion based on fully convolutional networks",
author="Rui Guo, Xuan-jing Shen, Xiao-yu Dong, Xiao-li Zhang",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="21",
number="7",
pages="1019-1033",
year="2020",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.1900336"
}
Abstract: We propose a multi-focus image fusion method built on a fully convolutional network for focus detection (FD-FCN). To obtain more precise focus detection maps, skip layers are added to the network so that both detailed and abstract visual information are available when FD-FCN generates the maps. A new training dataset for the proposed network is constructed from the CIFAR-10 dataset. The fusion algorithm using FD-FCN comprises three steps: (1) focus maps are obtained with FD-FCN; (2) a decision map is generated by applying morphological processing to the focus maps; (3) the source images are fused according to the decision map. We carry out several sets of experiments, and both subjective and objective assessments demonstrate the superiority of the proposed fusion method over state-of-the-art algorithms.
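The three-step pipeline described in the abstract can be sketched in NumPy as follows. This is only a minimal illustration, not the authors' implementation: the binarization threshold, the square structuring element, and the specific choice of an opening followed by a closing are assumptions standing in for the paper's morphological process, and `focus_map` stands in for the per-pixel output of FD-FCN (high where the first image is in focus).

```python
import numpy as np

def dilate(mask, k=3):
    """Binary dilation with a k x k square structuring element."""
    p = k // 2
    padded = np.pad(mask, p, mode="edge")
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def erode(mask, k=3):
    """Binary erosion, expressed by duality with dilation."""
    return ~dilate(~mask, k)

def fuse(img_a, img_b, focus_map, thr=0.5, k=3):
    """Steps 2-3 of the pipeline: clean the focus map into a
    decision map with a morphological opening and closing, then
    fuse pixel-wise according to the decision map.
    (thr and k are illustrative parameters, not from the paper.)"""
    d = focus_map > thr            # binarize the focus map
    d = dilate(erode(d, k), k)     # opening: remove isolated speckles
    d = erode(dilate(d, k), k)     # closing: fill small holes
    return np.where(d, img_a, img_b)
```

For example, a stray "focused" pixel in an otherwise defocused region is removed by the opening, so the fused image takes that pixel from the other source image rather than propagating the detection noise.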