2022
DOI: 10.1016/j.inffus.2022.03.007
PIAFusion: A progressive infrared and visible image fusion network based on illumination aware

Cited by 270 publications (98 citation statements)
References 50 publications
“…Subjective and objective evaluations of our method were carried out on two different datasets, derived from TNO [35] and MSRS [36]; the selected images are aligned. The TNO dataset contains infrared and visible images of various military scenes, and the MSRS dataset contains multi-spectral infrared and visible images of road scenarios.…”
Section: Experimental Results and Comparisons
confidence: 99%
“…The MSRS dataset [14] was chosen to train the network model. It is a mixture of daytime and nighttime road scenes, containing elements such as pedestrians, vehicles, and buildings.…”
Section: Methods
confidence: 99%
“…Xu et al. [13] adopted a densely connected structure to extract image features and added multiple skip connections between the dense blocks to increase information flow. Tang et al. [14] proposed an illumination-aware image fusion network that adaptively maintains the intensity distribution of salient targets according to the illumination distribution.…”
Section: Related Work
confidence: 99%
“…To combine fusion with high-level vision, a fusion method assisted by a high-level semantic task has been proposed [55]. In addition, a few methods investigate how illumination conditions affect image fusion [19], [56].…”
Section: A. Infrared and Visible Image Fusion
confidence: 99%
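The illumination-aware idea recurring in these excerpts — letting lighting conditions steer the fusion objective — can be sketched as a weighted intensity loss. This is a hedged numpy sketch, not the actual PIAFusion loss; the scalar day-probability `p_day` (e.g. from an illumination classifier) and the L1 form are assumptions.

```python
import numpy as np

def illumination_weighted_intensity_loss(fused, ir, vis, p_day):
    """Hypothetical illumination-aware intensity loss.

    p_day: scalar in [0, 1], assumed to come from an illumination
    classifier. In daytime (p_day -> 1) the fused image is pulled
    toward the visible image; at night (p_day -> 0) toward the
    infrared image, where salient targets remain distinct.
    """
    l_vis = np.mean(np.abs(fused - vis))  # L1 distance to visible image
    l_ir = np.mean(np.abs(fused - ir))    # L1 distance to infrared image
    return p_day * l_vis + (1.0 - p_day) * l_ir

# toy 4x4 grayscale images: bright infrared, dim visible, fused in between
ir = np.full((4, 4), 0.8)
vis = np.full((4, 4), 0.2)
fused = np.full((4, 4), 0.3)

day_loss = illumination_weighted_intensity_loss(fused, ir, vis, p_day=0.9)
night_loss = illumination_weighted_intensity_loss(fused, ir, vis, p_day=0.1)
```

With the same fused image, the nighttime weighting penalizes the larger infrared gap more heavily, which is the adaptive behavior the excerpt attributes to illumination-aware fusion.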
“…In recent years, with the rapid development of deep learning, researchers have explored fusion algorithms based on deep neural networks. The current mainstream deep fusion methods fall into three categories: autoencoder (AE)-based methods [18], convolutional neural network (CNN)-based methods [4], [19], and generative adversarial network (GAN)-based methods [9], [20]. Although image fusion is an image generation task, existing infrared and visible image fusion methods lack in-depth exploration of generative models.…”
confidence: 99%