2021
DOI: 10.1109/tmm.2020.2997127

AttentionFGAN: Infrared and Visible Image Fusion Using Attention-Based Generative Adversarial Networks

Cited by 214 publications (67 citation statements)
References 36 publications
“…In order to settle these issues, Li et al. [35] employed a multi-grained attention network with two independent encoders, namely MgAN-Fuse, which integrated a channel attention model into the multi-scale layers of the encoder; the multi-grained attention maps were then reconstructed into a fused image by the decoder. Subsequently, they extended the attention mechanism to both the generator and the discriminator, termed AttentionFGAN [36], which designed two multi-scale attention networks to generate the respective attention maps of the infrared and visible images; these maps were directly concatenated with the source images and fed to the fusion network to produce the fused result. These methods only adopt the channel attention mechanism to enhance feature representation, but ignore spatial characteristics.…”
Section: B. GAN-based Fusion Methods
confidence: 99%
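The excerpt above describes two architectural ideas: channel attention applied inside the encoder's multi-scale layers (MgAN-Fuse) and per-source attention maps concatenated with the source images before the fusion network (AttentionFGAN). The following is a minimal, hypothetical PyTorch sketch of those two ideas, not the authors' released code; all layer widths, kernel sizes, and module names are assumptions.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    # Squeeze-and-excitation style channel gate: global pooling, bottleneck, sigmoid weights.
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                               # squeeze: B x C x 1 x 1
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                                          # per-channel weights in (0, 1)
        )

    def forward(self, x):
        return x * self.gate(x)                                    # reweight feature channels

class AttentionBranch(nn.Module):
    # Produces a single-channel attention map for one source image.
    def __init__(self, in_channels=1, width=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            ChannelAttention(width),
            nn.Conv2d(width, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)                                         # B x 1 x H x W attention map

# Per the excerpt, each source image is concatenated with its attention map
# before entering the fusion network.
ir = torch.rand(2, 1, 64, 64)     # dummy infrared batch
vis = torch.rand(2, 1, 64, 64)    # dummy visible batch
att_ir = AttentionBranch()(ir)
att_vis = AttentionBranch()(vis)
fusion_input = torch.cat([ir, att_ir, vis, att_vis], dim=1)        # B x 4 x H x W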
“…Attention is a mechanism that tends to devote the available computational resources to the most informative parts of a signal. It has been widely used in numerous scenarios, such as action recognition [20]-[22], visual question answering [23]-[25], and adversarial learning [26]-[29]. Attention captures long-range contextual information and is generally applicable to different tasks, including machine translation [30], image captioning [31], scene segmentation [32], and object recognition [33].…”
Section: B. Attention Mechanism
confidence: 99%
“…The second type, shown in Fig. 14(b), has one generator and two discriminators, such as [116]-[118]. Their inputs are the concatenated infrared and visible images, while one discriminator compares the fused image with the visible image and the other compares it with the infrared image.…”
Section: Infrared-Visible Image Fusion
confidence: 99%
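The excerpt above specifies the data flow of the one-generator, two-discriminator scheme: the generator receives the concatenated infrared and visible images, and each discriminator contrasts the fused result with one of the two sources. Below is a minimal, hypothetical PyTorch training-step sketch of that scheme using a plain GAN loss; the network definitions, loss choice, and hyperparameters are illustrative assumptions, not those of [116]-[118].

import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder networks; real fusion GANs use far deeper generators and discriminators.
G = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))
def make_discriminator():
    return nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
D_ir, D_vis = make_discriminator(), make_discriminator()

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(list(D_ir.parameters()) + list(D_vis.parameters()), lr=1e-4)

ir = torch.rand(2, 1, 64, 64)     # dummy infrared batch
vis = torch.rand(2, 1, 64, 64)    # dummy visible batch

def bce(logits, label):           # cross-entropy against a constant real/fake label
    return F.binary_cross_entropy_with_logits(logits, torch.full_like(logits, label))

# Discriminator step: each discriminator distinguishes its own source from the fused image.
fused = G(torch.cat([ir, vis], dim=1)).detach()
loss_d = (bce(D_ir(ir), 1.0) + bce(D_ir(fused), 0.0) +
          bce(D_vis(vis), 1.0) + bce(D_vis(fused), 0.0))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool both discriminators so the fused image retains infrared
# intensity and visible detail at the same time.
fused = G(torch.cat([ir, vis], dim=1))
loss_g = bce(D_ir(fused), 1.0) + bce(D_vis(fused), 1.0)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

Published fusion GANs of this type typically also add content losses (e.g., intensity and gradient terms) alongside the adversarial terms; those are omitted here for brevity.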
“…(a) is the architecture with one generator and one discriminator, as in [114], [111], [115]. (b) is the architecture with one generator and two discriminators, as in [116]-[118]. (c) is the coupled GAN architecture, as in [119], [120].…”
confidence: 99%