2021
DOI: 10.1016/j.ijleo.2021.168084
An infrared and visible image fusion method based on VGG-19 network

Cited by 26 publications (9 citation statements)
References 16 publications
“…In this paper, images of the same electronic PCB are combined using X-rays and optical images. Through this process, salient and relevant information from hidden parts of images acquired by an X-ray machine (for instance, inside a chip) is fused with details from optical images [16].…”
Section: Image Fusion
confidence: 99%
“…The fusion strategy can also be designed with deep learning. In this paper, we employ the fusion method proposed by Jingwen Zhou et al. [16] for infrared and visible image fusion based on the VGG-19 model. Unlike the approach of Li et al. [19], this method does not require splitting the source image into base and detail parts.…”
Section: Image Fusion
confidence: 99%
“…First, CNNs are trained on visible, infrared, and fused images to learn the weightings required for fusion [22,23,24,25,26,27,28,29]. Second, pre-trained neural network models are used only to extract features and obtain weight maps from the images, thereby achieving the fusion objective [30,31,32,33]. Generative Adversarial Network (GAN)-based methods recast the integration of visible and infrared images as an adversarial process between a generator and a discriminator: the generator combines the visible and infrared images, while the discriminator evaluates whether the fused image retains sufficient visible and infrared information [34,35,36,37,38,39,40]. Encoder-decoder-based networks consist of two main components: an encoder and a decoder.…”
Section: Introduction
confidence: 99%
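The second family above (pre-trained networks used only to extract features and derive per-pixel weight maps) can be sketched in a few lines. This is a minimal NumPy illustration of the general idea only, not the paper's implementation: the deep features a pre-trained CNN such as VGG-19 would produce are replaced here by simple gradient stacks, and the function and parameter names (`fuse_with_weight_maps`, `eps`) are illustrative, not from the source.

```python
import numpy as np

def fuse_with_weight_maps(ir, vis, feat_ir, feat_vis, eps=1e-8):
    """Fuse two registered grayscale images via per-pixel weight maps.

    feat_ir / feat_vis are (C, H, W) feature stacks standing in for the
    deep features a pre-trained CNN (e.g. VGG-19) would extract; any
    per-pixel activity maps work for this sketch.
    """
    # Activity level at each pixel: L1-norm across the channel axis.
    act_ir = np.abs(feat_ir).sum(axis=0)
    act_vis = np.abs(feat_vis).sum(axis=0)
    # Normalise the two activity maps into complementary fusion weights.
    w_ir = act_ir / (act_ir + act_vis + eps)
    return w_ir * ir + (1.0 - w_ir) * vis

# Toy demo: gradient magnitude stands in for the deep "feature" stack.
rng = np.random.default_rng(0)
ir = rng.random((8, 8))
vis = rng.random((8, 8))
feat_ir = np.stack(np.gradient(ir))    # shape (2, 8, 8)
feat_vis = np.stack(np.gradient(vis))
fused = fuse_with_weight_maps(ir, vis, feat_ir, feat_vis)
```

Because the weights are a per-pixel convex combination, each fused pixel always lies between the corresponding infrared and visible values, which is what makes this strategy training-free once the feature extractor is fixed.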