2023
DOI: 10.1109/tgrs.2023.3263563
CBW-MSSANet: A CNN Framework With Compact Band Weighting and Multiscale Spatial Attention for Hyperspectral Image Change Detection

Abstract: Change detection (CD) aims to detect the changed areas of the same scene observed at different times, and is an important application of remote sensing images. As a key data source for CD, hyperspectral images (HSIs) are widely used in CD technology because of their rich spectral-spatial information. However, how to mine the multi-level spatial information of dual-temporal HSIs and focus on the features of the pixels to be classified individually remains a problem in the spatial attention mechanis…

Cited by 7 publications (1 citation statement)
References 56 publications
“…In [49], cross-temporal attention was designed to explore the temporal change information between bi-temporal features. Ou et al. [50] performed attention operations on image patches of different scales simultaneously, so that the central pixel to be detected has a higher weight in the fused feature map. The Transformer [37] is a network built on the multi-head self-attention (MHSA) mechanism that selectively attends to relevant information and disregards irrelevant input, allowing it to model long-range dependencies regardless of actual distance.…”
Section: Introduction
confidence: 99%
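The MHSA mechanism mentioned in the citation statement can be sketched in a few lines of numpy. This is a minimal, illustrative implementation of scaled dot-product multi-head self-attention (not the cited paper's network); the random projection matrices stand in for learned parameters, and all names here are hypothetical:

```python
import numpy as np

def multi_head_self_attention(X, num_heads, rng=None):
    """Minimal multi-head self-attention sketch.

    X: (seq_len, d_model) input sequence.
    Random weights stand in for learned per-head projections.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    seq_len, d_model = X.shape
    assert d_model % num_heads == 0
    d_head = d_model // num_heads

    heads = []
    for _ in range(num_heads):
        # Per-head query/key/value projections (random stand-ins).
        Wq, Wk, Wv = (rng.standard_normal((d_model, d_head)) / np.sqrt(d_model)
                      for _ in range(3))
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        # Scaled dot-product attention: every position attends to every
        # other position, so the interaction strength does not depend on
        # their distance in the sequence.
        scores = Q @ K.T / np.sqrt(d_head)               # (seq_len, seq_len)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
        heads.append(weights @ V)                        # (seq_len, d_head)
    # Concatenate heads back to the model dimension.
    return np.concatenate(heads, axis=-1)                # (seq_len, d_model)

# Example: a sequence of 5 tokens with model dimension 8, split over 2 heads.
out = multi_head_self_attention(np.ones((5, 8)), num_heads=2)
```

Because the attention matrix couples all position pairs directly, long-range dependencies cost the same single step as adjacent ones, which is the property the quoted passage highlights.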