2023
DOI: 10.1109/tim.2023.3237814
FusionGRAM: An Infrared and Visible Image Fusion Framework Based on Gradient Residual and Attention Mechanism

Cited by 19 publications (7 citation statements). References 44 publications.
Citation types: 0 supporting, 7 mentioning, 0 contrasting.
“…Therefore, when the window w is fixed, the original image with the larger covariance receives a larger weight. Moreover, to better extract the details and texture in each original image while also highlighting the infrared thermal information of the target of interest, the detail loss and pixel loss from [16] are added to the loss function. The detail loss assumes that the texture information in the fused image corresponds to the largest gradient among the original images, and that the intensity information is the pixel with the greatest brightness in the original images.…”

Section: Loss Function (citation type: mentioning)
Confidence: 99%
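The loss formulation quoted above (texture from the largest source gradient, intensity from the brightest source pixel) can be made concrete with a short sketch. The following is a minimal PyTorch illustration, assuming single-channel inputs, a Sobel gradient operator, and L1 distances; the function names and the choice of Sobel are illustrative assumptions, not the exact implementation of [16].

import torch
import torch.nn.functional as F

def sobel_gradient(img):
    # img: (N, 1, H, W). Sobel kernels approximate horizontal/vertical gradients.
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    return torch.abs(F.conv2d(img, kx, padding=1)) + torch.abs(F.conv2d(img, ky, padding=1))

def detail_loss(fused, ir, vis):
    # Texture target: element-wise maximum gradient over the two source images.
    target = torch.max(sobel_gradient(ir), sobel_gradient(vis))
    return F.l1_loss(sobel_gradient(fused), target)

def pixel_loss(fused, ir, vis):
    # Intensity target: element-wise brightest pixel across the source images.
    return F.l1_loss(fused, torch.max(ir, vis))

In training, these two terms would be weighted and summed with the covariance-weighted term the excerpt describes; the weights are hyperparameters of the citing method.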
“…In conventional image fusion algorithms, various transformation methods are usually used to extract features. This process generates a large amount of redundant information and requires a complex fusion model design [6, 16-18]. In recent years, with the development of artificial intelligence technology, deep learning techniques have also been widely applied in the field of image fusion [19].…”

Section: Introduction (citation type: mentioning)
Confidence: 99%
“…Recently, there has been a tendency to build performance-efficient deep neural networks for various image fusion tasks because of their strong nonlinear learning abilities. Learning-based fusion architectures such as the autoencoder (AE) [13, 14, 16, 19], the convolutional neural network (CNN) [15, 18, 20], and the generative adversarial network (GAN) [21, 22, 24, 27, 29] have brought clear improvements in fusion performance, but their single-scale frameworks can hardly capture the full-scale features of real-world targets and fail to make the fused images photorealistic. More importantly, most methods capitalize directly on the features extracted in the last layer to reconstruct the fused images, leaving earlier-layer features unused.…”

Section: Technical Backgrounds (citation type: mentioning)
Confidence: 99%
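The criticism in that excerpt (reconstruction from last-layer features only) suggests a simple remedy: feed earlier-layer features to the decoder as well. The sketch below is an assumption-level PyTorch illustration of that idea, not the architecture of any cited method; the class name, channel widths, and the channel-stacked two-image input are all hypothetical choices.

import torch
import torch.nn as nn

class MultiLevelFusionNet(nn.Module):
    # Encoder-decoder that reconstructs from features of all encoder stages,
    # rather than from the last layer alone.
    def __init__(self, in_ch=2, base=16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, padding=1), nn.ReLU())
        self.enc3 = nn.Sequential(nn.Conv2d(base * 2, base * 4, 3, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.Conv2d(base * 7, base, 3, padding=1), nn.ReLU(),  # 7*base = base + 2*base + 4*base
            nn.Conv2d(base, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, ir, vis):
        x = torch.cat([ir, vis], dim=1)  # stack the two source images on channels
        f1 = self.enc1(x)
        f2 = self.enc2(f1)
        f3 = self.enc3(f2)
        # Concatenate early, middle, and last-layer features for reconstruction.
        return self.dec(torch.cat([f1, f2, f3], dim=1))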
“…References [14-17] employed convolution kernels of different sizes to extract the common and unique features of the source images. References [18-20] captured the multilevel features of the source images via residual learning. Moreover, modern GAN-based approaches [21-30] exploit multi-granularity convolution kernels at the same feature level, yielding different receptive fields and in turn improving fusion performance.…”

Section: Introduction (citation type: mentioning)
Confidence: 99%
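The multi-granularity kernels this excerpt mentions map naturally onto an inception-style block: parallel convolutions with different kernel sizes at the same feature level, merged by a 1x1 convolution. The sketch below is a generic PyTorch illustration under that assumption; the class name and kernel sizes are hypothetical, not taken from the cited GAN-based methods.

import torch
import torch.nn as nn

class MultiGranularityBlock(nn.Module):
    # Parallel 3x3 / 5x5 / 7x7 convolutions give different receptive fields
    # at the same feature level; a 1x1 convolution merges the branches.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (3, 5, 7)
        ])
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, kernel_size=1)

    def forward(self, x):
        feats = [torch.relu(b(x)) for b in self.branches]
        return self.fuse(torch.cat(feats, dim=1))

Same-level branches keep spatial resolution identical across kernel sizes (padding = k // 2), so their outputs can be concatenated channel-wise before the merge.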