2022 IEEE International Conference on Multimedia and Expo (ICME)
DOI: 10.1109/icme52920.2022.9859770
PanFormer: A Transformer Based Model for Pan-Sharpening

Cited by 14 publications (4 citation statements)
References 16 publications
“…HyperTransformer can extract cross-modality dependencies in a global manner, which is completely different from convolution. Moreover, Fusformer [31] and Panformer [32] have further verified the performance of the transformer architecture for pansharpening. While several Transformer-based HS pansharpening methods have been explored, challenges persist (Table 2):…”
mentioning
confidence: 73%
“…We select nine comparative methods, including six SOTA deep learning‐based methods. Among them, PNN [14] and MSDCNN [16] are CNN‐based methods, Pan‐GAN [18] and UCGAN [19] are GAN‐based methods, and PanFormer [34] and DR‐Net [35] are transformer‐based methods. For traditional methods, we select IHS [5], SFIM [10], and BDSD [54], which represent the CS‐based method, the MRA‐based method, and the VO‐based method, respectively.…”
Section: Methods
mentioning
confidence: 99%
“…In the context of pansharpening, the use of transformers is relatively new; several recent methods have emerged that leverage transformer‐based architectures to improve the quality of pan‐sharpened images. Panformer [34] employs a transformer architecture to learn the complex relationships between low‐resolution multispectral images and high‐resolution panchromatic images. By utilizing the self‐attention mechanism, Panformer is capable of effectively capturing long‐range dependencies, leading to better spatial and spectral preservation in pan‐sharpened images.…”
Section: Related Work
mentioning
confidence: 99%
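The excerpt above credits self-attention with capturing long-range dependencies between image tokens. A minimal single-head self-attention sketch in plain NumPy illustrates why: every output token is a weighted mix of all input tokens, regardless of spatial distance. This is an illustrative sketch only, not the actual PanFormer implementation; the function and variable names here are our own.

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention (illustrative sketch).

    x: (n_tokens, d) token embeddings; wq/wk/wv: (d, d) projection matrices.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    # Pairwise token affinities, scaled by sqrt(d) for numerical stability.
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Row-wise softmax: each token's attention weights over all tokens sum to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row mixes information from every token, near or far.
    return weights @ v

rng = np.random.default_rng(0)
n_tokens, d = 16, 8  # e.g. 16 image patches with 8-dim embeddings
x = rng.standard_normal((n_tokens, d))
wq, wk, wv = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention(x, wq, wk, wv)
print(out.shape)  # (16, 8)
```

Unlike a convolution, whose receptive field is bounded by its kernel size, the attention weights above connect every patch to every other patch in a single layer, which is the "global manner" the quoted passage contrasts with convolution.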
“…Like the proposed CSFNet, these two methods are trained on reduced and full resolution datasets, respectively. For the supervised methods, Pannet [17], TFNet [44], and PanFormer [45] are selected to compare the fusion performance. For these three methods, due to the requirement of the reference image to supervise the training process, we only train them on the reduced resolution dataset.…”
Section: Comparison Methods and Quality Measures Metrics
mentioning
confidence: 99%