2020
DOI: 10.3390/app10020554

Infrared and Visible Image Fusion with a Generative Adversarial Network and a Residual Network

Abstract: Infrared and visible image fusion can obtain combined images with salient hidden objectives and abundant visible details simultaneously. In this paper, we propose a novel method for infrared and visible image fusion with a deep learning framework based on a generative adversarial network (GAN) and a residual network (ResNet). The fusion is accomplished with an adversarial game and directed by the unique loss functions. The generator with residual blocks and skip connections can extract deep features of source …
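The abstract mentions a generator built from residual blocks with skip connections. As a minimal illustrative sketch (not the paper's actual architecture), the core idea of a residual block can be shown with plain matrix products standing in for convolutions; the function name and shapes here are assumptions for illustration only:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Two layers (plain matrix products stand in for convolutions)
    plus an identity skip connection, as in a standard ResNet block."""
    y = relu(x @ w1)
    y = y @ w2
    return relu(y + x)  # the skip connection preserves the input's features

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))       # a batch of 4 feature vectors
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1
out = residual_block(x, w1, w2)
```

The skip connection lets deep features pass through unchanged, which is why residual generators can extract deep features of source images without degrading shallow detail.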

Cited by 21 publications (11 citation statements)
References 63 publications
“…This is also why few scholars use infrared and visible image pairs to train deep learning models. Following the ideas of Li et al. [20] and Xu et al. [34], the 566 groups of images are cropped to extend the data sets: each image is divided into several image blocks of the same size (128 × 128 in our method).…”
Section: Preparation Of Data Sets
confidence: 99%
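The cropping operation described above, splitting each image into equally sized 128 × 128 blocks to enlarge the training set, can be sketched as follows; the function name, stride choice, and image sizes are assumptions for illustration:

```python
import numpy as np

def crop_patches(image, size=128, stride=128):
    """Split a 2-D image into size×size patches.
    With stride == size the patches are non-overlapping;
    a smaller stride would yield overlapping crops."""
    h, w = image.shape
    patches = []
    for top in range(0, h - size + 1, stride):
        for left in range(0, w - size + 1, stride):
            patches.append(image[top:top + size, left:left + size])
    return np.stack(patches)

img = np.zeros((256, 384))   # stand-in for one infrared or visible image
patches = crop_patches(img)  # 2 rows × 3 columns of 128 × 128 blocks
```

Applied to each of the 566 registered infrared/visible pairs, this turns a small set of full images into a much larger set of training patches.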
“…[53] went a step further and defined a trainable fusion layer. Refs. [54–56] applied the GAN approach to fusing infrared and visible images using end-to-end neural networks. Ref.…”
Section: Unsupervised End-to-end Deep Learning-based Fusion Approaches
confidence: 99%
“…According to the subjective evaluation, the fusion results attend only to the visible-light information and lose the thermal-target information of the infrared image. Therefore, to retain more effective information from the source images and preserve the correlation between the source images and the fused image, the content loss function designed in this paper combines a gradient loss and a similarity loss. Since the thermal radiation information of the infrared image is characterized by its pixel intensity, and the texture detail of the visible image is characterized by its gradient [23], the image intensity and gradient are computed accordingly. In the design of the content loss function, the idea of reference [24] is introduced: image structural similarity is an image quality measure that quantifies the differences between images.…”
Section: Figure 1 DCGAN Framework
confidence: 99%
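The quoted passage pairs an intensity term (infrared thermal radiation lives in pixel values) with a gradient term (visible texture lives in gradients). A minimal sketch of such a content loss, assuming forward differences as the gradient operator and a weighting factor `lam` of our own choosing, might look like this (the cited papers' exact formulations may differ):

```python
import numpy as np

def gradient(img):
    """Horizontal/vertical forward differences as a simple gradient proxy."""
    gx = np.diff(img, axis=1, prepend=img[:, :1])
    gy = np.diff(img, axis=0, prepend=img[:1, :])
    return gx, gy

def content_loss(fused, ir, vis, lam=0.5):
    """Intensity term pulls the fused image toward the infrared pixel
    intensities; the gradient term pulls it toward the visible texture."""
    intensity = np.mean((fused - ir) ** 2)
    fgx, fgy = gradient(fused)
    vgx, vgy = gradient(vis)
    grad = np.mean((fgx - vgx) ** 2 + (fgy - vgy) ** 2)
    return intensity + lam * grad

a = np.ones((4, 4))
b = a + 1.0
zero_loss = content_loss(a, a, a)   # identical images: every term vanishes
shift_loss = content_loss(b, a, a)  # constant offset: only intensity penalized
```

A structural-similarity (SSIM) term, as introduced from reference [24], would be added on top of this to measure correlation between the source and fused images.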