2021
DOI: 10.1109/tim.2020.3011766
Infrared and Visible Image Fusion Using Visual Saliency Sparse Representation and Detail Injection Model

Cited by 60 publications (34 citation statements)
References 41 publications
“…It is known that a single evaluation index cannot fully demonstrate the quality of fused images in quantitative assessment. Thus, to evaluate the fusion results comprehensively, six popular fusion evaluation metrics are introduced in this section: visual information fidelity for fusion (VIFF) [29, 30, 31, 32, 33], Q_S [34], average gradient (AG) [20, 35, 36], correlation coefficient (CC) [20, 37, 38], spatial frequency (SF) [20, 39, 40, 41], and Q_W [34, 42]. For all six metrics, a higher value indicates better fusion performance.…”
Section: Results
confidence: 99%
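Several of the metrics named in the excerpt above are straightforward to compute. The sketch below, assuming grayscale images supplied as NumPy arrays, shows common textbook formulations of average gradient (AG), spatial frequency (SF), and correlation coefficient (CC); the exact variants used in the cited papers may differ in detail.

```python
import numpy as np

def average_gradient(img):
    """AG: mean magnitude of local intensity change (a sharpness proxy)."""
    img = img.astype(np.float64)
    gx = np.diff(img, axis=1)[:-1, :]   # horizontal differences
    gy = np.diff(img, axis=0)[:, :-1]   # vertical differences
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def spatial_frequency(img):
    """SF: sqrt(RF^2 + CF^2) from row- and column-wise differences."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))

def correlation_coefficient(fused, source):
    """CC: Pearson correlation between the fused image and one source."""
    f = fused.astype(np.float64).ravel()
    s = source.astype(np.float64).ravel()
    return float(np.corrcoef(f, s)[0, 1])
```

For all three, higher is better: a constant image scores zero AG and SF, and a fused image identical to a source scores a CC of 1 against it.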
“…Li and Wu [24] employed an encoder/decoder network architecture and introduced densely connected convolutional layers in the encoder to extract features from the source images, avoiding information loss during convolution. Yang et al [25] proposed a fusion model based on visual saliency sparse representation and detail injection to avoid losing the significant thermal radiation targets of infrared images. Zhang et al [26] proposed an image fusion network based on proportional maintenance of gradient and intensity, named PMGI, which preserves source image information through gradient and intensity paths.…”
Section: Deep Learning-based Methods
confidence: 99%
“…Saliency-based methods [23, 24] imitate the human visual system, which is readily drawn to prominent objects, and improve the visual effect of fused images by preserving the integrity of salient objects. Zhang et al [25] used the salient target areas of an infrared image to determine the fusion weights, ensuring that the fusion results retain significant thermal radiation targets.…”
Section: Related Work
confidence: 99%
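The saliency-weighting idea described in the excerpt above can be illustrated with a minimal sketch. The saliency map here is a simple intensity-contrast heuristic, a hypothetical stand-in for the actual saliency detection in [25]; the fusion itself is a per-pixel convex combination of the infrared and visible images, so salient (typically hot) regions draw more from the infrared source.

```python
import numpy as np

def saliency_weights(ir, eps=1e-8):
    """Toy saliency map: per-pixel distance from the mean intensity,
    normalized to [0, 1]. Bright thermal targets get weights near 1."""
    ir = ir.astype(np.float64)
    sal = np.abs(ir - ir.mean())
    return (sal - sal.min()) / (sal.max() - sal.min() + eps)

def saliency_weighted_fusion(ir, vis):
    """Per-pixel convex combination: salient infrared regions dominate,
    while non-salient regions fall back to the visible image."""
    w = saliency_weights(ir)
    return w * ir.astype(np.float64) + (1.0 - w) * vis.astype(np.float64)
```

Because each output pixel is a convex combination, the fused value always lies between the corresponding infrared and visible values, so salient targets are carried over without introducing out-of-range intensities.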