2022
DOI: 10.1364/ol.466191

TIPFNet: a transformer-based infrared polarization image fusion network

Abstract: The fusion of infrared intensity and polarization images can generate a single image with better visual perception and more vital information. Existing fusion methods based on a convolutional neural network (CNN), with local feature extraction, are limited in fully exploiting the salient target features of polarization. In this Letter, we propose a transformer-based deep network to improve the performance of infrared polarization image fusion. Compared with existing CNN-based methods, our model can encode…
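The abstract contrasts the local receptive field of a CNN with the long-range modeling of a transformer. The following minimal sketch (not the authors' TIPFNet implementation; module names, channel sizes, and layer choices are illustrative assumptions) shows one way a cross-attention block can let every spatial position of the intensity feature map attend to every position of the polarization feature map, which a plain convolution cannot do.

# Minimal sketch, assuming a simple cross-attention fusion block; this is
# NOT the published TIPFNet architecture, only an illustration of global
# (long-range) feature interaction between the two modalities.
import torch
import torch.nn as nn


class GlobalFusionBlock(nn.Module):
    def __init__(self, channels: int = 64, num_heads: int = 4):
        super().__init__()
        # Shallow convolutional embeddings for the two input modalities.
        self.embed_s0 = nn.Conv2d(1, channels, kernel_size=3, padding=1)   # intensity (S0)
        self.embed_dop = nn.Conv2d(1, channels, kernel_size=3, padding=1)  # polarization (e.g., DoLP)
        # Cross-attention: intensity tokens query polarization tokens globally.
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)
        # Reconstruct a single fused image from the fused token map.
        self.decode = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, s0: torch.Tensor, dop: torch.Tensor) -> torch.Tensor:
        b, _, h, w = s0.shape
        f_s0 = self.embed_s0(s0).flatten(2).transpose(1, 2)     # (B, H*W, C)
        f_dop = self.embed_dop(dop).flatten(2).transpose(1, 2)  # (B, H*W, C)
        # Every intensity position can draw information from any polarization
        # position, regardless of spatial distance.
        fused, _ = self.attn(query=f_s0, key=f_dop, value=f_dop)
        fused = self.norm(fused + f_s0)
        fused = fused.transpose(1, 2).reshape(b, -1, h, w)
        return torch.sigmoid(self.decode(fused))


if __name__ == "__main__":
    s0 = torch.rand(1, 1, 32, 32)   # infrared intensity image
    dop = torch.rand(1, 1, 32, 32)  # degree-of-polarization image
    print(GlobalFusionBlock()(s0, dop).shape)  # torch.Size([1, 1, 32, 32])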

Cited by 16 publications (2 citation statements) | References 18 publications
“…A neural network has the advantages of being adaptable, fault-tolerant, and noise-resistant [9], which has led to the successful application of neural networks in various fields, including image fusion. In 2021, Wang et al. [10] proposed an image fusion algorithm combining NSCT and CNN to fuse the high-frequency and low-frequency image information, respectively; in the same year, Zhang et al. [11] proposed a novel deep neural network with a self-learning strategy to solve the polarization image fusion problem; in 2022, Xu et al. [12] proposed a novel unified and unsupervised end-to-end image fusion network, termed U2Fusion; in the same year, Tang et al. [13] incorporated image registration, image fusion, and the semantic requirements of high-level vision tasks into a single framework and proposed a novel image registration and fusion method, named SuperFusion; Li et al. [14] proposed a transformer-based deep neural network to improve the performance of IR polarization image fusion; Ma et al. [15] proposed a novel general image fusion framework based on cross-domain long-range learning and the Swin Transformer, termed SwinFusion; and Xu et al. [16] proposed a novel unsupervised polarization and intensity image fusion network via pixel information guidance and an attention mechanism, named PAPIF.…”
Section: Introduction
confidence: 99%
“…Unsupervised models extract features from the different bands, perform feature fusion according to a designed fusion strategy, and finally recover the fused image using a decoder [16, 24–27]. In intensity-polarization image fusion, a neural network model extracts the texture information from the intensity image and the salient information from the polarization image for feature fusion, which is subsequently recovered and reconstructed into a fused image that retains the information of both original images [28–30]. However, these image fusion methods are either only for multi-band images or only for intensity and polarization images at a single wavelength.…”
Section: Introduction
confidence: 99%