Infrared and visible image fusion via parallel scene and texture learning (2022)
DOI: 10.1016/j.patcog.2022.108929

Cited by 28 publications (7 citation statements). References 30 publications.
“…This framework, even with a relatively small network size, proposes a network that still possesses a large receptive field. The literature [38] suggested a trainable hybrid network to enhance similar degradation problems, assessed the global content of low-light inputs using an encoder-decoder network, and output a new spatially variational RNN as edge flow, exhibiting favorable image fusion performance. Among these methods, some researchers put forward a fusion framework based on content and detail branches.…”
Section: Fusion Methods Based on RNNs
confidence: 99%
“…Reproduced with permission. [256] Copyright 2023, Elsevier. Reproduced with permission. [275] Copyright 2023, American Chemical Society.…”
Section: Agricultural Protection
confidence: 99%
“…Using parallel scene and texture learning, Xu and co‐workers present an IR–vis image fusion approach (Figure 23c), leveraging the detail branch and the content branch of deep neural networks to concurrently gather various properties of the source images before reconstructing the combined image. [256] In images with insufficient visible light (00054N and 01042N), pedestrians cannot be detected or seen, and stationary cars (00689N and 01061N) cannot be recognized due to their low heat signatures at night. By maintaining contrast and retaining structural texture detail, the fused images provide improved detection results, identifying cars and pedestrians with higher confidence in their locations.…”
Section: Applications
confidence: 99%
“…DL-based methods can fuse infrared and visible images in an end-to-end way with their powerful nonlinear fitting ability [9]. DL methods include CNN-based methods, generative adversarial network (GAN)-based methods, transformer-based methods [21], and other methods [22], [23]. CNN-based methods extract infrared and visible image features by designing parallel convolution kernels [24].…”
Section: Introduction
confidence: 99%
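The statements above describe a two-branch design: a content (scene) branch and a detail (texture) branch process the source images in parallel, and their outputs are recombined into the fused image. The sketch below illustrates that idea only in miniature, using numpy with fixed hand-picked kernels (a box filter and a Laplacian) standing in for the learned convolutional filters; the kernel choices, the `alpha` weighting, and the additive reconstruction are all illustrative assumptions, not the method of the paper.

```python
import numpy as np

def conv2d(img, kernel):
    """Naive valid-mode 2D convolution, for illustration only."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Hypothetical fixed kernels standing in for learned filters:
# a smoothing kernel for the scene/content branch and a
# Laplacian for the texture/detail branch.
CONTENT_KERNEL = np.full((3, 3), 1.0 / 9.0)
DETAIL_KERNEL = np.array([[0, -1, 0],
                          [-1, 4, -1],
                          [0, -1, 0]], dtype=float)

def fuse(ir, vis, alpha=0.5):
    """Toy parallel-branch fusion: both sources pass through both
    branches; content outputs are blended, detail outputs are
    combined by max-abs selection, then the branches are summed."""
    content = (alpha * conv2d(ir, CONTENT_KERNEL)
               + (1 - alpha) * conv2d(vis, CONTENT_KERNEL))
    detail = np.maximum(np.abs(conv2d(ir, DETAIL_KERNEL)),
                        np.abs(conv2d(vis, DETAIL_KERNEL)))
    return content + detail
```

In the actual network the two branches are stacks of learned convolutions trained end-to-end, but the data flow (parallel feature extraction per branch, followed by reconstruction) is the same shape as this toy.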