2023
DOI: 10.1109/tgrs.2023.3241257
Change Detection on Remote Sensing Images Using Dual-Branch Multilevel Intertemporal Network

Cited by 88 publications (41 citation statements) · References 51 publications
“…To evaluate the effectiveness and efficiency of our MFAENet, we select several state-of-the-art (SOTA) models as competitors, including CNN-based methods: FC-EF [19], FC-SD [19], FC-SC [19], SNUNet [49], TFI-GR [45], Changer [37], MFIN [56], MFSFNet [57], and CICNet [58], and two transformer-based methods: BIT [31] and ChangeFormer [33]. We reproduce these models using their publicly available code and default parameters to ensure fair comparisons.…”
Section: Methods (mentioning confidence: 99%)
“…However, none of the above methods establishes long-range dependencies among the features, which in turn limits detection performance. Feng et al. [50] unified self-attention and cross-attention in a single module and proposed a cross-temporal joint attention block. This block guides the global feature distribution of each input.…”
Section: Feature Enhancement in Remote Sensing CD (mentioning confidence: 99%)
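The cross-temporal joint attention idea described above can be illustrated with a minimal NumPy sketch. This is an assumption-laden simplification, not the authors' implementation: queries come from one date's flattened features, while keys and values are drawn from both dates together, so each position attends jointly within and across time. The function names and shapes are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def joint_attention(f1, f2):
    """Hedged sketch of one cross-temporal joint attention step.

    f1, f2: (N, C) flattened features from the two acquisition dates.
    Queries come from f1; keys/values are the concatenation of both
    dates, so attention is computed within and across time at once.
    """
    scale = 1.0 / np.sqrt(f1.shape[-1])
    kv = np.concatenate([f1, f2], axis=0)       # (2N, C) joint key/value bank
    attn = softmax(f1 @ kv.T * scale, axis=-1)  # (N, 2N) joint attention map
    return attn @ kv                            # (N, C) re-weighted features
```

Applying the same step with the roles of `f1` and `f2` swapped would refine the second date's features symmetrically.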
“…Meanwhile, to strengthen the network's ability to extract change features, many methods have been proposed for fusing the dual-temporal multi-scale features produced by the Siamese network, including feature exchange [33], differential enhancement [34], [35], and various attention mechanisms, such as self-attention [36], combined self- and cross-attention [9], spatial attention [37], combined spatial and channel attention [38], [39], and selective attention [40]. For example, ECFNet [35] adjusts the channels of all features to a uniform value after taking the difference of the dual-temporal multi-scale features, and then fuses them from bottom to top so that features at different scales contribute equally to the final result during fusion.…”
Section: A. CNN-Based CD Methods (mentioning confidence: 99%)
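The ECFNet-style fusion described in the excerpt can be sketched as follows. This is a hedged reconstruction from the description only: multi-scale difference features are projected to a common channel count (standing in for 1x1 convolutions), then accumulated from the coarsest scale upward. All names and the nearest-neighbour upsampling are assumptions.

```python
import numpy as np

def project(feat, weight):
    # 1x1-conv-style channel projection: (C_in, H, W) -> (C_out, H, W)
    c, h, w = feat.shape
    return (weight @ feat.reshape(c, -1)).reshape(weight.shape[0], h, w)

def upsample2x(feat):
    # nearest-neighbour upsampling, doubling both spatial dimensions
    return feat.repeat(2, axis=1).repeat(2, axis=2)

def bottom_up_fuse(diff_feats, c_out=8, rng=None):
    """Sketch of ECFNet-style fusion (assumed from the excerpt).

    diff_feats: multi-scale difference features, finest to coarsest,
    each coarser level at half the spatial resolution of the previous.
    All levels are projected to c_out channels, then summed bottom-up.
    """
    rng = rng or np.random.default_rng(0)  # random weights stand in for learned 1x1 convs
    proj = [project(f, rng.standard_normal((c_out, f.shape[0]))) for f in diff_feats]
    fused = proj[-1]                    # start from the coarsest scale
    for f in reversed(proj[:-1]):
        fused = f + upsample2x(fused)   # upsample, then add the next finer scale
    return fused
```

Because every level carries the same channel count after projection, each scale contributes with equal weight to the sum, matching the "contribute equally" behaviour the excerpt attributes to ECFNet.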
“…STANet [9] introduces a self-attention module that computes positional relations between any two pixels in the dual-temporal features and uses them to generate change maps at different scales. DMINet [36] unifies self-attention and cross-attention in the JoinAtt module, which focuses attention on the change region and suppresses noise unrelated to the change. Zhao et al. [41] proposed TSNet, a three-branch network that uses a Siamese network and a single-stream network to extract features from the dual-temporal images and from their concatenation, respectively, and a dual-channel attention module to fuse the three feature streams into change features.…”
Section: A. CNN-Based CD Methods (mentioning confidence: 99%)
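The final step these methods share, turning refined bi-temporal features into a change map, can be sketched with a simple metric-based rule, as the STANet description suggests. This is a generic illustration under assumed names, not any paper's exact formulation: the per-pixel distance between the two dates' features is thresholded into a binary map.

```python
import numpy as np

def change_map(f1, f2, thresh=1.0):
    """Hedged sketch of metric-based change-map generation.

    f1, f2: (C, H, W) refined features for the two dates. Pixels whose
    feature vectors lie far apart across time are marked as changed.
    The threshold is a hypothetical hyperparameter.
    """
    dist = np.linalg.norm(f1 - f2, axis=0)  # (H, W) per-pixel feature distance
    return dist > thresh                    # (H, W) binary change map
```

In practice the threshold (or a learned margin, as in contrastive losses) is tuned per dataset, and the map may be produced at several scales and merged.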
“…Recent advancements in deep learning have driven a shift toward network-based change detection methods. Leveraging the capabilities of neural networks, these methods can extract robust image features and effectively perform spatiotemporal fusion at various feature extraction stages [13], [33]. Early deep learning-based algorithms treated change detection as a dense prediction task, with most exploring different stages of bi-temporal feature fusion to generate the final change segmentation map.…”
Section: B. Change Detection (mentioning confidence: 99%)