2023
DOI: 10.1080/01431161.2023.2225228
SiamixFormer: a fully-transformer Siamese network with temporal Fusion for accurate building detection and change detection in bi-temporal remote sensing images

Cited by 11 publications (6 citation statements)
References 27 publications
“…We compared our method with FC-EF, FC-Siam-conc, FC-Siam-diff [19], STANet [31], BIT [32], L-UNet [30], DSIFN [33], SNUNet [28], RDP-Net [20], Changer [47], SiamixFormer [54], BAN [45], and LightCDNet [65]. These methods represent deep learning-based approaches in the field of CD.…”
Section: Comparison With SOTA Methods (mentioning, confidence: 99%)
“…These modules neglect to consider the spatial relationships among bi-temporal data. Mohammadian et al. [54] used the Key, Query, and Value matrices from the self-attention mechanism within the Transformer to fuse bi-temporal features and proposed SiamixFormer. The fusion result is obtained by a temporal transformer.…”
Section: Feature Fusion (mentioning, confidence: 99%)
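As a rough illustration of the fusion idea described in the statement above, the sketch below fuses bi-temporal feature maps with cross-attention, taking queries from the time-1 features and keys/values from the time-2 features. The module name (TemporalFusion), the dimensions, and the residual/normalization layout are assumptions for illustration only, not the authors' exact SiamixFormer implementation.

```python
# Minimal sketch (not the authors' exact design) of cross-attention temporal fusion:
# queries come from the time-1 features, keys/values from the time-2 features.
import torch
import torch.nn as nn


class TemporalFusion(nn.Module):
    def __init__(self, d_model: int = 256, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, feat_t1: torch.Tensor, feat_t2: torch.Tensor) -> torch.Tensor:
        # feat_t1, feat_t2: (B, C, H, W) feature maps from a Siamese encoder
        b, c, h, w = feat_t1.shape
        q = feat_t1.flatten(2).transpose(1, 2)   # (B, H*W, C) queries from time 1
        kv = feat_t2.flatten(2).transpose(1, 2)  # (B, H*W, C) keys/values from time 2
        fused, _ = self.attn(q, kv, kv)          # cross-attention over spatial tokens
        fused = self.norm(fused + q)             # residual connection + layer norm
        return fused.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    f1 = torch.randn(2, 256, 16, 16)  # toy bi-temporal feature maps
    f2 = torch.randn(2, 256, 16, 16)
    print(TemporalFusion()(f1, f2).shape)  # torch.Size([2, 256, 16, 16])
```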
“…To address this challenge and achieve better representation capabilities of deep features, designing deeper and more complex feature extraction networks has gotten significant attention as a primary research focus. Many researchers have put forward several enhanced models to achieve more discriminative feature representations, such as combining Generative Adversarial Networks (GAN) [38][39][40] or Recurrent Neural Networks (RNN) [41,42], or using feature extraction models based on the Transformer architecture [43][44][45] to expand the receptive field. Some studies focus on the effective utilization of features, such as using spatial or channel attention mechanisms [30][31][32]36,46] or employing multi-scale feature fusion for feature enhancement [25,29,[47][48][49].…”
Section: Introduction (mentioning, confidence: 99%)
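For context on the channel-attention idea mentioned in the statement above, a generic squeeze-and-excitation style block can be sketched as follows. This is an illustrative assumption, not the implementation used in any of the cited works.

```python
# Generic squeeze-and-excitation style channel attention (illustrative only):
# global-average-pool the feature map, pass the channel descriptor through a
# small bottleneck MLP, and rescale each channel by the resulting weight.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W); w: per-channel scale in [0, 1]
        w = self.fc(x.mean(dim=(2, 3)))            # squeeze (B, C), then excite
        return x * w.unsqueeze(-1).unsqueeze(-1)   # reweight channels


if __name__ == "__main__":
    x = torch.randn(2, 64, 32, 32)
    print(ChannelAttention(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```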
“…In recent years, the leveraging of open-source datasets [7][8][9][10][11][12][13][14][15][16] in the field of BCD research has resulted in a surge of state-of-the-art (SOTA) methods. These methods predominantly rely on deep convolutional neural networks (DCNNs) [17][18][19][20][21][22][23][24] and transformer models [25][26][27][28]. They approach BCD as a high-stakes prediction task, producing pixel-level outputs.…”
Section: Introduction (mentioning, confidence: 99%)