2018
DOI: 10.1142/s0219691318500182

Infrared and visible image fusion with convolutional neural networks

Abstract: The fusion of infrared and visible images of the same scene aims to generate a composite image which can provide a more comprehensive description of the scene. In this paper, we propose an infrared and visible image fusion method based on convolutional neural networks (CNNs). In particular, a siamese convolutional network is applied to obtain a weight map which integrates the pixel activity information from two source images. This CNN-based approach can deal with two vital issues in image fusion as a whole, na…
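The abstract describes a per-pixel weight map that blends the two aligned source images. A minimal NumPy sketch of that blending step follows (the weight map is assumed given here; a later sketch shows how a siamese CNN might produce it — this is an illustration, not the authors' implementation):

```python
# Minimal sketch: blend aligned grayscale IR/visible images with a weight map.
import numpy as np

def weighted_fusion(ir: np.ndarray, vis: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Per-pixel convex combination; w close to 1 favors the infrared image."""
    assert ir.shape == vis.shape == w.shape
    return w * ir.astype(np.float32) + (1.0 - w) * vis.astype(np.float32)

# Toy usage with a uniform 0.5 weight map (a real map would come from the CNN).
ir = np.random.rand(64, 64).astype(np.float32)
vis = np.random.rand(64, 64).astype(np.float32)
fused = weighted_fusion(ir, vis, np.full((64, 64), 0.5, dtype=np.float32))
print(fused.shape)  # (64, 64)
```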

Cited by 320 publications (169 citation statements)
References 34 publications
“…In this part, nine other fusion methods are selected to compare the fusion results with ours, including curvelet transform (CVT) [71], dual-tree complex wavelet transform (DTCWT) [72], Laplacian pyramid (LP) [73], nonsubsampled contourlet transform (NSCT) [74], two-scale image fusion based on visual saliency (TSIFVS) [75], guided filtering based fusion (GFF) [76], convolutional neural network based fusion (CNN) [24], dense block based fusion [55], which includes the Dense-add and Dense-L1 variants corresponding to different fusion strategies, and a GAN-based method (FusionGAN) [26]. We used the codes provided by the authors or a well-known toolbox to generate the fused images from the source image pairs, except for the last method, which cannot produce the ideal outputs directly.…”
Section: Fusion Results
Mentioning confidence: 99%
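Several of the comparison methods named in this excerpt are classical multi-scale transforms. For reference, here is a simplified sketch of Laplacian pyramid (LP) fusion [73] in NumPy/OpenCV; the level count and the max-absolute fusion rule are assumptions for illustration, not the cited implementation:

```python
# Simplified Laplacian-pyramid fusion: max-abs rule on detail levels,
# averaging at the coarsest level. Inputs are aligned uint8 grayscale images.
import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
          for i in range(levels)]
    lp.append(gp[-1])  # keep the coarsest Gaussian level as the base
    return lp

def lp_fuse(a, b, levels=4):
    la, lb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    # Pick the coefficient with larger magnitude at each detail level.
    fused = [np.where(np.abs(x) >= np.abs(y), x, y) for x, y in zip(la[:-1], lb[:-1])]
    fused.append(0.5 * (la[-1] + lb[-1]))
    # Collapse the fused pyramid back into a single image.
    out = fused[-1]
    for lvl in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(lvl.shape[1], lvl.shape[0])) + lvl
    return np.clip(out, 0, 255).astype(np.uint8)
```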
“…In the article, the authors proposed a CNN-based siamese network for multi-focus image fusion and extended it to infrared and visible image fusion. Later, considering the different imaging modalities, they combined image pyramid decomposition and a local similarity strategy [24] with the former siamese network to fuse infrared and visible images. The siamese network consisted of two branches whose weights were constrained to be identical.…”
Section: DL-based Fusion Methods
Mentioning confidence: 99%
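The shared-weight constraint described in this excerpt can be made concrete in a few lines of PyTorch: a single convolutional module scores both inputs, and a softmax over the two score maps yields the fusion weight map. Layer sizes here are illustrative assumptions, not the paper's architecture:

```python
# Hedged sketch of the siamese (shared-weight) scorer; not the paper's network.
import torch
import torch.nn as nn

class SiameseScorer(nn.Module):
    def __init__(self):
        super().__init__()
        # One conv stack; reusing it for both inputs is the weight tying itself.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),  # one activity score per pixel
        )

    def forward(self, ir, vis):
        # Both branches run through the same module, so they share one weight set.
        s_ir, s_vis = self.features(ir), self.features(vis)
        # Softmax over the pair gives a per-pixel weight map in [0, 1] for the IR image.
        w = torch.softmax(torch.cat([s_ir, s_vis], dim=1), dim=1)
        return w[:, :1]

model = SiameseScorer()
w = model(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
print(w.shape)  # torch.Size([1, 1, 64, 64])
```

This weight map is exactly the `w` consumed by the `weighted_fusion` sketch given after the abstract.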
“…We choose six representative multimodal image fusion algorithms for performance comparison. The six image fusion methods are the DTCWT-based method (Lewis et al, 2007), the NSST-based method (Gao et al, 2013), the convolutional neural network (CNN)-based method (Liu, Chen, Cheng, Peng, & Wang, 2018), the SFL-CT-based method (Feng et al, 2013), the CST-based method (Qiu et al, 2018), and the generative adversarial network (GAN)-based method (Tang et al, 2019). The parameters of these six fusion methods are set to the default values reported in their original publications to conduct an unbiased comparison.…”
Section: Compared Methods
Mentioning confidence: 99%
“…Although CNN-based RGB-T salient object detection algorithms have not been well investigated yet, a large number of deep neural networks with RGB-T inputs have been presented for other computer vision and image processing tasks, such as pedestrian detection [36]-[38], image fusion [50], and object tracking [51]-[53]. For example, Wagner et al [37] presented an RGB-T pedestrian detection method that fuses information with CNNs, where information from the RGB and thermal infrared images was integrated via an early-fusion and a late-fusion CNN architecture.…”
Section: RGB-T Salient Object Detection
Mentioning confidence: 99%
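The early-fusion versus late-fusion distinction attributed to Wagner et al can be sketched as two toy PyTorch classifiers: early fusion concatenates RGB and thermal channels at the input of one trunk, while late fusion merges modality-specific feature streams before the classification head. All layer choices below are illustrative assumptions:

```python
# Toy early-fusion vs late-fusion RGB-T classifiers (illustrative only).
import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    """Concatenate RGB (3 ch) and thermal (1 ch) at the input; one shared trunk."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, rgb, thermal):
        return self.trunk(torch.cat([rgb, thermal], dim=1))

class LateFusion(nn.Module):
    """Separate streams per modality; features are merged before the head."""
    def __init__(self, num_classes=2):
        super().__init__()
        def stream(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.rgb_stream, self.t_stream = stream(3), stream(1)
        self.head = nn.Linear(64, num_classes)

    def forward(self, rgb, thermal):
        feats = torch.cat([self.rgb_stream(rgb), self.t_stream(thermal)], dim=1)
        return self.head(feats)

rgb, thermal = torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64)
print(EarlyFusion()(rgb, thermal).shape, LateFusion()(rgb, thermal).shape)
```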