2020
DOI: 10.1109/access.2020.3037770

Infrared and Visible Image Fusion Using a Deep Unsupervised Framework With Perceptual Loss

Abstract: The fusion of infrared and visible images can exploit the indication characteristics and textural details of the source images to realize all-weather detection. Deep learning (DL) based fusion solutions can reduce computational cost and complexity compared with traditional methods, since there is no need to design complex feature-extraction methods and fusion rules. However, there are no standard reference images, and publicly available infrared and visible image pairs are scarce. Most supervised DL-based…

Cited by 13 publications (12 citation statements)
References 47 publications
“…In our earlier reading of the literature, a large number of papers [ 2 , 3 , 4 , 5 , 9 , 11 , 12 , 13 , 14 , 15 , 16 , 19 , 25 , 26 , 27 , 28 , 29 , 30 ] used the TNO dataset for the training and testing of the model. Therefore, the same TNO dataset is used in this paper.…”
Section: Experimental Results and Analysis
confidence: 99%
“…Therefore, fusion performance analysis based on quantitative evaluation is essential and complements the subjective evaluation. Drawing on the evaluation metrics found in most fusion papers [ 3 , 4 , 5 , 11 , 12 , 14 , 15 , 16 , 25 , 26 , 27 ], four mainstream evaluation metrics have been selected for this paper: the Peak Signal-to-Noise Ratio (PSNR) [ 26 ], Structural Similarity Index (SSIM) [ 4 , 12 , 15 , 16 , 27 ], Spatial Frequency (SF) [ 3 , 5 , 11 , 15 , 25 ] and Mutual Information (MI) [ 3 , 4 , 5 , 14 , 16 ]. They are defined as follows; for PSNR, PSNR = 10 log10(z² / MSE), where z represents the difference between the maximum and minimum greyscale values of the ideal reference image, usually 255.…”
Section: Experimental Results and Analysis
confidence: 99%
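Three of the four metrics named in the statement above (PSNR, SF, MI) have compact standard definitions and can be sketched directly; SSIM is omitted here for brevity, as it is usually computed with a library such as scikit-image. This is a minimal illustrative sketch, not the cited papers' code, and the function names are assumptions.

```python
import numpy as np

def psnr(ref, fused, z=255.0):
    """Peak Signal-to-Noise Ratio: 10 * log10(z^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - fused.astype(np.float64)) ** 2)
    return 10.0 * np.log10(z ** 2 / mse)

def spatial_frequency(img):
    """SF = sqrt(RF^2 + CF^2) from row/column gradient energies."""
    img = np.asarray(img, dtype=np.float64)
    rf = np.mean(np.diff(img, axis=1) ** 2)  # row-frequency energy
    cf = np.mean(np.diff(img, axis=0) ** 2)  # column-frequency energy
    return np.sqrt(rf + cf)

def mutual_information(a, b, bins=256):
    """MI between two images, estimated from the joint grey-level histogram."""
    hist, _, _ = np.histogram2d(np.ravel(a), np.ravel(b), bins=bins)
    pxy = hist / hist.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginal of a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of b
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```

A fused image identical to the reference gives infinite PSNR, a constant image has zero SF, and MI of an image with itself equals that image's entropy.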
“…In contrast to the “reconstruction training” mode, the feature-fusion part is usually designed as dual-light feature concatenation, and the interaction is implicitly implemented in the subsequent image-reconstruction process [ 11 , 14 , 21 ]. In addition, for the GANs commonly used in this mode, the fusion model is trained simultaneously with the designed discriminators [ 10 , 12 , 22 , 23 , 24 , 25 ]. Through this adversarial game, both sides improve their respective capabilities simultaneously, which in turn improves the fusion result.…”
Section: Related Work
confidence: 99%
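The simultaneous fusion-model/discriminator update described in the statement above can be sketched as a single GAN training step. The tiny networks, the content-loss term, and the choice of the visible image as the discriminator's "real" sample are illustrative assumptions, not the cited papers' designs.

```python
import torch
import torch.nn as nn

class Fuser(nn.Module):
    """Generator: dual-light concatenation of IR + visible -> fused image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, ir, vis):
        # interaction is implicit in reconstruction after concatenation
        return self.net(torch.cat([ir, vis], dim=1))

class Disc(nn.Module):
    """Discriminator: scores a single-channel image as real vs. fused."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1),
                                 nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                                 nn.Flatten(), nn.Linear(16, 1))
    def forward(self, x):
        return self.net(x)

def train_step(g, d, ir, vis, opt_g, opt_d, bce=nn.BCEWithLogitsLoss()):
    fused = g(ir, vis)
    # discriminator step: separate the visible image from the fused result
    opt_d.zero_grad()
    real, fake = d(vis), d(fused.detach())
    d_loss = bce(real, torch.ones_like(real)) + bce(fake, torch.zeros_like(fake))
    d_loss.backward()
    opt_d.step()
    # generator step: fool the discriminator while retaining IR content
    opt_g.zero_grad()
    score = d(fused)
    g_loss = bce(score, torch.ones_like(score)) + \
             nn.functional.mse_loss(fused, ir)
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

Both updates happen in the same step, which is the "simultaneous training" the statement refers to: the discriminator sharpens its ability to spot fused outputs while the generator learns to produce fused images the discriminator accepts.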