2023
DOI: 10.1016/j.inffus.2022.12.007
AT-GAN: A generative adversarial network with attention and transition for infrared and visible image fusion

Cited by 48 publications (17 citation statements)
References 34 publications
“…One of the recent developments in image fusion is the use of deep learning techniques. AT-GAN 2 proposes a generative adversarial network (GAN) with intensity attention modules and semantic transition modules to extract key information from the infrared and visible modalities. Han et al. 1 propose a scene texture attention module to achieve target-level image fusion.…”
Section: Related Work (mentioning)
confidence: 99%
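The statement above mentions a GAN generator with attention modules for infrared and visible fusion. Below is a minimal PyTorch sketch of that general idea: a generator with a spatial attention block over the concatenated modalities. The module names and layer sizes are illustrative assumptions, not AT-GAN's actual architecture, and the adversarial discriminator and loss terms are omitted.

```python
# Minimal sketch of a fusion generator with a spatial attention block.
# Names (IntensityAttention, FusionGenerator) and layer sizes are assumptions,
# not the AT-GAN implementation; the discriminator is omitted.
import torch
import torch.nn as nn

class IntensityAttention(nn.Module):
    """Predicts a per-pixel weight map that emphasizes salient (bright) regions."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels // 2, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 1, 1),
            nn.Sigmoid(),  # attention weights in [0, 1]
        )

    def forward(self, feat):
        return feat * self.conv(feat)  # reweight features spatially

class FusionGenerator(nn.Module):
    """Encodes concatenated IR + visible inputs and decodes a single fused image."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.attn = IntensityAttention(64)
        self.decoder = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, ir, vis):
        x = torch.cat([ir, vis], dim=1)  # stack the two modalities channel-wise
        return self.decoder(self.attn(self.encoder(x)))

# Usage: fused = FusionGenerator()(torch.rand(1, 1, 256, 256), torch.rand(1, 1, 256, 256))
```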
“…Detecting objects in fused visible and infrared images is a significant task for many applications, such as traffic surveillance and military reconnaissance. However, most research in this area focuses on proposing better image fusion methods 1,2 or designing better infrared object detectors. 3,4 In other words, the significance of object detection on fused images is underrated.…”
Section: Introduction (mentioning)
confidence: 99%
“…For example, SIFT was utilized to guarantee pre-registration results close to the ground truth, and mutual information was utilized during the fine-tuning process to achieve the most precise registration results. 10,29 Deep learning-based methods, such as RFNet, AT-GAN, and SemLA, utilize deep neural networks to improve multimodal image registration accuracy. 15,36,37 However, deep learning-based methods depend greatly on the quality of the training data, which limits their performance when registering coral reef images with little texture.…”
Section: Introduction (mentioning)
confidence: 99%
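The two-stage idea cited above, SIFT-based pre-registration followed by mutual-information fine-tuning, can be sketched in a few lines. The snippet below is a generic illustration using OpenCV and NumPy, not the cited papers' pipelines; the function names, the 0.75 ratio-test threshold, and the 32-bin histogram are assumptions.

```python
# Generic two-stage sketch: SIFT matching for coarse alignment, mutual
# information (MI) as the similarity score for subsequent fine-tuning.
import cv2
import numpy as np

def sift_preregister(moving, fixed):
    """Estimate a homography from matched SIFT keypoints (coarse alignment)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(moving, None)
    kp2, des2 = sift.detectAndCompute(fixed, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe's ratio test
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

def mutual_information(a, b, bins=32):
    """MI between two images; higher means better multimodal alignment."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz])).sum())

# Coarse alignment first; MI can then score small warp refinements in a local search.
# moving, fixed = cv2.imread("ir.png", 0), cv2.imread("vis.png", 0)
# H = sift_preregister(moving, fixed)
# warped = cv2.warpPerspective(moving, H, fixed.shape[::-1])
# print("MI after pre-registration:", mutual_information(warped, fixed))
```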
“…Saavedra et al. (2022) combined multithreshold segmentation with region growing for image segmentation and detection. With its rapid development (Rao et al. 2023; Xie et al. 2023), deep learning has also been applied to BP detection. Yang & Li (2019) proposed a masked region convolutional neural network model, GBP-MRCNN, for the morphological classification of G-band bright points (GBPs).…”
Section: Introduction (mentioning)
confidence: 99%
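As a rough illustration of combining multithresholding with region growing, the sketch below seeds bright pixels using multi-Otsu thresholds and grows regions around them with scikit-image's flood fill. The function name and tolerance value are assumptions for illustration, not the cited authors' algorithm.

```python
# Hedged sketch: multi-Otsu thresholding to pick bright seeds, then region
# growing (flood fill) around each seed. Requires scikit-image and numpy.
import numpy as np
from skimage.filters import threshold_multiotsu
from skimage.segmentation import flood
from skimage.measure import label

def bright_region_segmentation(image, classes=3, tolerance=0.05):
    """Seed bright pixels via multi-Otsu thresholds, then grow regions around them."""
    thresholds = threshold_multiotsu(image, classes=classes)
    seeds = image > thresholds[-1]                  # brightest class as seed candidates
    mask = np.zeros_like(seeds, dtype=bool)
    for r, c in np.argwhere(seeds):
        if not mask[r, c]:                          # skip seeds already absorbed
            mask |= flood(image, (int(r), int(c)), tolerance=tolerance)  # region growing
    return label(mask)                              # integer label per detected region

# Usage on a synthetic image with two bright blobs:
# img = np.zeros((64, 64)); img[10:14, 10:14] = 1.0; img[40:45, 40:45] = 0.9
# labels = bright_region_segmentation(img + 0.01 * np.random.rand(64, 64))
```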