FusionGAN: A generative adversarial network for infrared and visible image fusion
2019
DOI: 10.1016/j.inffus.2018.09.004

Cited by 1,068 publications (458 citation statements)
References 38 publications
“…Li et al [45] proposed a CNN with a dense block structure to solve the infrared and visible image fusion problem. To improve the perceptual quality of the fused image, Ma [46] proposed a generative adversarial network (GAN), called FusionGAN, for infrared and visible image fusion.…”
Section: Multi-modal Image Fusion
mentioning
confidence: 99%
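To make the cited idea concrete, below is a minimal sketch of a FusionGAN-style setup: a generator that maps a concatenated infrared/visible pair to a fused image, and a discriminator that judges whether an image looks like a real visible image. It assumes a PyTorch implementation; the layer widths, least-squares adversarial term, and the content-loss weight `lam` are illustrative choices, not the paper's exact configuration.

```python
# Minimal FusionGAN-style sketch (PyTorch assumed); layer widths and the
# loss weight `lam` are illustrative, not the paper's exact values.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a concatenated (infrared, visible) pair to one fused image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 5, padding=2), nn.BatchNorm2d(32), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 32, 3, padding=1), nn.BatchNorm2d(32), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 1), nn.Tanh(),
        )

    def forward(self, ir, vis):
        return self.net(torch.cat([ir, vis], dim=1))

class Discriminator(nn.Module):
    """Scores whether a single-channel image looks like a real visible image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.net(x)

def gradient(img):
    """Horizontal/vertical finite differences, used as a texture proxy."""
    dx = img[..., :, 1:] - img[..., :, :-1]
    dy = img[..., 1:, :] - img[..., :-1, :]
    return dx, dy

def generator_loss(disc_out, fused, ir, vis, lam=100.0):
    """Adversarial term plus a content term (infrared intensity, visible gradients)."""
    adv = torch.mean((disc_out - 1.0) ** 2)          # least-squares GAN target
    intensity = torch.mean((fused - ir) ** 2)
    gx_f, gy_f = gradient(fused)
    gx_v, gy_v = gradient(vis)
    texture = torch.mean((gx_f - gx_v) ** 2) + torch.mean((gy_f - gy_v) ** 2)
    return adv + lam * (intensity + texture)

# Smoke test on random tensors standing in for 1-channel image patches.
ir = torch.rand(4, 1, 64, 64)
vis = torch.rand(4, 1, 64, 64)
G, D = Generator(), Discriminator()
fused = G(ir, vis)
print(fused.shape, generator_loss(D(fused), fused, ir, vis).item())
```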
“…In recent years, deep learning methods have been widely used to automatically learn features from raw data and have enabled successful computer vision applications, 22,23 especially in HAR. [24][25][26] Nonetheless, deep learning methods usually need large-scale training sets, which are difficult to obtain because of economic and technical limits.…”
Section: Introduction
mentioning
confidence: 99%
“…Moreover, experimental results in complicated ambient conditions show that the proposed algorithm in this paper outperforms state-of-the-art algorithms in both qualitative and quantitative evaluations, and this study can be extended to other types of image fusion. […] methods [16][17][18], deep learning-based methods [2,19,20] and other methods [21,22]. Next, the ideas of these methods are briefly introduced. The subspace-based method first projects a high-dimensional source image into a low-dimensional space, and then fuses the information contained in the subspace, such as principal component analysis (PCA) [10], independent component analysis (ICA) [11], robust principal component analysis (RPCA) [12] and so on.…”
mentioning
confidence: 99%
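The subspace-based idea the quote describes can be illustrated with a minimal PCA-fusion sketch in NumPy: the two registered source images are treated as a 2 x N data matrix, and the normalized leading eigenvector of their covariance supplies the fusion weights. This is the textbook formulation of PCA fusion, not code taken from the cited papers.

```python
# PCA-based fusion sketch (NumPy): weight the two sources by the normalized
# entries of the leading eigenvector of their joint covariance. Illustrative only.
import numpy as np

def pca_fuse(ir, vis):
    """Fuse two same-sized grayscale images with PCA-derived weights."""
    data = np.stack([ir.ravel(), vis.ravel()])          # 2 x N data matrix
    cov = np.cov(data)                                   # 2 x 2 covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    principal = np.abs(eigvecs[:, np.argmax(eigvals)])   # leading component
    w = principal / principal.sum()                      # weights sum to 1
    return w[0] * ir + w[1] * vis

# Toy example with random arrays standing in for registered source images.
ir = np.random.rand(128, 128)
vis = np.random.rand(128, 128)
fused = pca_fuse(ir, vis)
print(fused.shape, fused.min(), fused.max())
```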
“…Deep learning-based methods aim to imitate the behavioral perception mechanism of the human brain, which gives them strong adaptability and feature extraction ability [2]. However, this kind of method is computationally intensive and places high demands on hardware [19,20]. In addition, there are other ideas and perspectives that inspire new image fusion methods, such as entropy [21], total variation [22] and so on. With the development of computer vision technology, saliency-based methods have been successfully applied to infrared and visible image fusion because they effectively utilize the complementary information of the source images.…”
mentioning
confidence: 99%
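For the saliency-based line of work mentioned above, one common recipe (not the specific algorithm of the cited paper) is to compute a per-pixel saliency map for each source and blend with normalized saliency weights, so each pixel is drawn mostly from the source where it stands out more. A minimal NumPy/SciPy sketch, with local-mean deviation as a crude stand-in for a saliency detector:

```python
# Saliency-weighted fusion sketch: deviation from a local mean as a crude
# saliency map, then per-pixel weighted blending. One common recipe only.
import numpy as np
from scipy.ndimage import uniform_filter

def saliency(img, size=9):
    """Crude per-pixel saliency: absolute deviation from a local mean."""
    return np.abs(img - uniform_filter(img, size=size))

def saliency_fuse(ir, vis, eps=1e-8):
    s_ir, s_vis = saliency(ir), saliency(vis)
    w_ir = s_ir / (s_ir + s_vis + eps)    # per-pixel weights in [0, 1]
    return w_ir * ir + (1.0 - w_ir) * vis

# Toy example on random arrays standing in for registered source images.
ir = np.random.rand(128, 128)
vis = np.random.rand(128, 128)
print(saliency_fuse(ir, vis).shape)
```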