2021
DOI: 10.1109/tim.2020.3029360
Multigrained Attention Network for Infrared and Visible Image Fusion

Cited by 37 publications (25 citation statements)
References 39 publications
“…In addition, some researchers have exploited generative adversarial networks (GANs) [32][33][34][35][36] for image fusion and achieved satisfactory results to some extent. Typically, Ma et al. first presented FusionGAN [13], in which an adversarial learning network comprising a generator and a discriminator was proposed.…”
Section: B. Deep Learning-Based Fusion Methods
confidence: 99%
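The excerpt above names the core FusionGAN pattern: a generator that fuses the two modalities and a discriminator that judges the fused result against one of the sources. The following is a minimal PyTorch sketch of that pattern; the layer widths, the content-loss term toward the infrared image, and its 100.0 weight are illustrative assumptions, not the actual architecture or losses of FusionGAN [13].

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Fuses infrared + visible inputs into one image (sizes are assumed)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 5, padding=2), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 32, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 1), nn.Tanh(),
        )

    def forward(self, ir, vis):
        # Fuse by concatenating the two modalities along the channel axis.
        return self.net(torch.cat([ir, vis], dim=1))

class Discriminator(nn.Module):
    """Outputs a real/fake logit for a single-channel image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)

def adversarial_step(gen, disc, ir, vis):
    bce = nn.BCEWithLogitsLoss()
    fused = gen(ir, vis)
    # Discriminator: visible images are "real", fused outputs are "fake".
    real_logits = disc(vis)
    fake_logits = disc(fused.detach())
    d_loss = bce(real_logits, torch.ones_like(real_logits)) + \
             bce(fake_logits, torch.zeros_like(fake_logits))
    # Generator: fool the discriminator while keeping infrared intensities
    # (the 100.0 content weight is an assumed value, not the paper's).
    adv_logits = disc(fused)
    g_loss = bce(adv_logits, torch.ones_like(adv_logits)) + \
             100.0 * torch.mean((fused - ir) ** 2)
    return d_loss, g_loss
```

In training, the discriminator step and the generator step would alternate, each driven by its own optimizer, which is the adversarial learning scheme the excerpt refers to.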
“…Subsequently, Yang et al. [34] constructed a texture-conditional generative adversarial network to capture the texture map and further proposed a squeeze-and-excitation module to highlight texture information. Li et al. presented a multi-grained attention network, namely MgAN-Fuse [35], which integrated attention modules into the encoder-decoder network to capture context information in the generator. They also introduced AttentionFGAN [36], in which a multi-scale attention module was integrated into both the generator and the discriminator.…”
Section: B. Deep Learning-Based Fusion Methods
confidence: 99%
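MgAN-Fuse is described as integrating channel attention into the encoder's multi-scale layers, and the Yang et al. work as using a squeeze-and-excitation module. A minimal squeeze-and-excitation-style channel attention block of the kind implied is sketched below in PyTorch; the reduction ratio of 16 is an assumed default, and the modules in [34][35] may differ in detail.

```python
import torch.nn as nn

class ChannelAttention(nn.Module):
    """SE-style gate: reweight feature channels by learned importance."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # squeeze: global spatial context
        self.fc = nn.Sequential(              # excite: per-channel weights
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # emphasized channels pass through, others are damped
```

Inserting such a block after each multi-scale encoder stage lets the network stress informative feature channels before the decoder reconstructs the fused image.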
“…To settle these issues, Li et al. [35] employed a multi-grained attention network with two independent encoders, namely MgAN-Fuse, which integrated a channel attention module into the multi-scale layers of the encoder; the decoder then reconstructed a fused image from the multi-grained attention maps. Subsequently, they extended the attention mechanism to both the generator and the discriminator in AttentionFGAN [36], which designed two multi-scale attention networks to generate the respective attention maps of the infrared and visible images; these maps were directly concatenated with the source images so that the fusion network could produce the fused result.…”
Section: B. GAN-Based Fusion Methods
confidence: 99%
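The AttentionFGAN description above has a concrete data-flow consequence: each modality gets its own attention map, and the maps are concatenated with the source images before entering the fusion network. The sketch below illustrates that input pipeline in PyTorch; the tiny AttentionNet is a placeholder standing in for the paper's multi-scale attention networks, not the published design.

```python
import torch
import torch.nn as nn

class AttentionNet(nn.Module):
    """Placeholder per-modality attention network (assumed structure)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)  # H x W attention map with values in [0, 1]

ir_att, vis_att = AttentionNet(), AttentionNet()

def fusion_input(ir, vis):
    # Concatenate each source image with its own attention map along the
    # channel axis, as the excerpt describes, giving a 4-channel input
    # for the downstream fusion network.
    return torch.cat([ir, ir_att(ir), vis, vis_att(vis)], dim=1)
```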
“…The second type, shown in Fig. 14(b), has one generator and two discriminators, as in [116][117][118]. The generator's input is the concatenated infrared and visible images, while one discriminator compares the fused image with the visible image and the other compares the fused image with the infrared image.…”
Section: Infrared-Visible Image Fusion
confidence: 99%
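The one-generator/two-discriminator layout of Fig. 14(b) translates into two separate real/fake objectives, one per modality, plus a generator term that must fool both discriminators at once. A hedged PyTorch sketch follows, reusing discriminator modules of the kind sketched earlier; the loss form and equal weighting are assumptions, not taken from [116][117][118].

```python
import torch
import torch.nn as nn

def dual_disc_losses(d_ir, d_vis, fused, ir, vis):
    bce = nn.BCEWithLogitsLoss()

    def d_loss(d, real, fake):
        # Standard real/fake objective for one discriminator.
        r, f = d(real), d(fake.detach())
        return bce(r, torch.ones_like(r)) + bce(f, torch.zeros_like(f))

    # D_ir judges the fused image against the infrared source,
    # D_vis against the visible source, so neither modality dominates.
    loss_d = d_loss(d_ir, ir, fused) + d_loss(d_vis, vis, fused)

    # The generator tries to make the fused image pass as both modalities.
    li, lv = d_ir(fused), d_vis(fused)
    loss_g = bce(li, torch.ones_like(li)) + bce(lv, torch.ones_like(lv))
    return loss_d, loss_g
```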
“…(a) is the architecture with one generator and one discriminator, as in [114][111][115]. (b) is the architecture with one generator and two discriminators, as in [116][117][118]. (c) is the coupled GAN architecture, as in [119][120].…”
confidence: 99%