2022
DOI: 10.1109/tgrs.2022.3185640
BS2T: Bottleneck Spatial–Spectral Transformer for Hyperspectral Image Classification

Cited by 41 publications (17 citation statements)
References 50 publications
“…Comparison Methods: To verify the effectiveness of the proposed S2Former, seven state-of-the-art HSI classifiers are used for comparison, including SVM [38], CDCNN [22], FDSSC [27], DBDA [30], SpectralFormer [34], HSI-Mixer [36], and BS2T [35].…”
Section: Experimental Settings
confidence: 99%
“…A new backbone network named SpectralFormer [34] combines the advantages of transformers and CNNs, aiming to learn local spectral feature representations and feature transfer between shallow and deep layers. Song et al. [35] designed a Bottleneck Spatial–Spectral Transformer (BS2T) to describe the dependencies between HSI pixels over long-range locations and bands. HSI-Mixer [36] uses a simple CNN architecture to simulate the function of a transformer, reconsidering the significant inductive bias of convolution.…”
Section: Introduction
confidence: 99%
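The band-to-band dependencies that the excerpt above attributes to BS2T can be illustrated with a toy single-head self-attention over the spectral dimension. This is a minimal sketch, not the authors' architecture: the function name `spectral_self_attention`, the random projection matrices standing in for learned weights, and the tensor sizes are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spectral_self_attention(patch, d_k=16, seed=0):
    """Toy single-head self-attention across the spectral bands of one
    HSI pixel patch. `patch` has shape (n_bands, n_features); each band
    attends to every other band, so the output mixes information over
    arbitrarily distant bands -- the long-range dependency the quoted
    description refers to. Weights are random stand-ins, not trained."""
    rng = np.random.default_rng(seed)
    n_bands, n_feat = patch.shape
    W_q = rng.standard_normal((n_feat, d_k))  # hypothetical learned projections
    W_k = rng.standard_normal((n_feat, d_k))
    W_v = rng.standard_normal((n_feat, d_k))
    Q, K, V = patch @ W_q, patch @ W_k, patch @ W_v
    attn = softmax(Q @ K.T / np.sqrt(d_k))    # (n_bands, n_bands) band-to-band weights
    return attn @ V                           # (n_bands, d_k) attended features

# A fake pixel patch with 8 spectral bands and 4 features per band.
out = spectral_self_attention(np.random.default_rng(1).standard_normal((8, 4)))
print(out.shape)  # (8, 16)
```

A spatial variant follows the same pattern with pixels of a local window, rather than bands, as the attention tokens.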
“…For example, Sun et al. 17 proposed the Spectral-Spatial Attention Network (SSAN), in which attention is used to search for effective features of HSI cubes. The studies by Song and Wang 18,19 also revealed that spectral-spatial methods can better extract the information in HSI.…”
Section: Introduction
confidence: 99%
“…Recently, Transformer-based methods were proposed to learn long-distance information, and they achieve good experimental performance in many computer vision fields, including semantic segmentation [27][28][29][30], image classification [31][32][33][34], object detection [35][36][37][38], and super-resolution [39][40][41][42]. In ref.…”
Section: Introduction
confidence: 99%