2020
DOI: 10.3390/rs12122035
Residual Group Channel and Space Attention Network for Hyperspectral Image Classification

Abstract: Recently, deep learning methods based on three-dimensional (3-D) convolution have been widely used in hyperspectral image (HSI) classification tasks and have shown good classification performance. However, affected by the irregular distribution of the various classes in HSI datasets, most previous 3-D convolutional neural network (CNN)-based models require more training samples to obtain better classification accuracies. In addition, as the network deepens, the spatial resolution of feature maps grad…



Cited by 29 publications (16 citation statements). References 42 publications.
“…In visual perception, attention focuses mainly on the features of interest and suppresses redundant information. The attention mechanism can be integrated into a CNN framework with negligible overhead and trained together with the CNN [31]. Inspired by the convolutional block attention module (CBAM) [32], which allows auto-learning of pixel correlation among different feature maps, we add the attention mechanism after the feature fusion layer.…”
Section: Feature Fusion Layer Based On Attention Mechanism
confidence: 99%
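The channel-attention idea this statement describes (squeeze each feature map to a descriptor, pass it through a shared MLP, and gate the channels) can be sketched in plain Python. This is a minimal illustration of a CBAM-style channel branch, not the cited papers' implementation; the weight matrices `w1` and `w2` stand in for learned parameters and their values here are hypothetical.

```python
import math

def channel_attention(fmap, w1, w2):
    """CBAM-style channel attention on a feature map.

    fmap: list of C channels, each an HxW nested list of floats.
    w1:   C x H weight matrix of the shared MLP's first layer.
    w2:   H x C weight matrix of the shared MLP's second layer.
    Returns the feature map with each channel scaled by its
    sigmoid attention weight.
    """
    C = len(fmap)
    # Squeeze: average-pool and max-pool each channel to one scalar.
    avg = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in fmap]
    mx = [max(max(row) for row in ch) for ch in fmap]

    def mlp(v):
        # Shared two-layer perceptron with a ReLU bottleneck.
        hidden = [max(0.0, sum(v[i] * w1[i][j] for i in range(C)))
                  for j in range(len(w1[0]))]
        return [sum(hidden[k] * w2[k][j] for k in range(len(hidden)))
                for j in range(C)]

    # Combine both pooled descriptors, then squash to (0, 1).
    logits = [a + m for a, m in zip(mlp(avg), mlp(mx))]
    gate = [1.0 / (1.0 + math.exp(-z)) for z in logits]

    # Excite: rescale every channel by its attention weight.
    return [[[x * gate[c] for x in row] for row in fmap[c]] for c in range(C)]
```

In a real network this recalibration sits inside the CNN and the MLP weights are learned jointly with the convolutions, which is why the overhead is negligible.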
“…[Fig. 1: Block diagram of the proposed method] The authors use a long short-term memory (LSTM) model and a multi-scale convolutional neural network to extract spectral and spatial features, respectively, showing an improvement in overall accuracy across multiple HSI datasets. For attention-transfer-based approaches, mainly inspired by the unsupervised image saliency detection with Gestalt-laws-guided optimization and attention in [31], a three-dimensional convolutional neural network (3-D CNN)-based residual channel and space attention network (RGSCA) is used for HSI classification [32]. It uses residual connections in both a bottom-up and a top-down manner to optimize the channel-wise and spatial-wise feature attentions during training.…”
Section: The Relational-Knowledge Based DA
confidence: 99%
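The spatial counterpart of the channel/space attention this statement mentions pools across channels at each pixel and then gates every spatial location. The sketch below is a simplified pure-Python illustration (CBAM applies a 7x7 convolution over the pooled maps; a per-pixel 1x1 weighting stands in here), and `wa`, `wm`, and `b` are hypothetical learned scalars, not values from the cited papers.

```python
import math

def spatial_attention(fmap, wa, wm, b):
    """Simplified CBAM-style spatial attention.

    fmap: list of C channels, each an HxW nested list of floats.
    wa, wm, b: scalar weights/bias applied to the channel-average
    and channel-max descriptors at each pixel (hypothetical values).
    Returns fmap with every channel scaled by a per-pixel gate.
    """
    C, H, W = len(fmap), len(fmap[0]), len(fmap[0][0])
    gate = [[0.0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            # Pool across the channel axis at this spatial position.
            vals = [fmap[c][i][j] for c in range(C)]
            avg, mx = sum(vals) / C, max(vals)
            # Sigmoid gate in (0, 1) for this pixel.
            gate[i][j] = 1.0 / (1.0 + math.exp(-(wa * avg + wm * mx + b)))
    # Apply the same spatial gate to every channel.
    return [[[fmap[c][i][j] * gate[i][j] for j in range(W)]
             for i in range(H)] for c in range(C)]
```

Stacking this after a channel branch, with residual connections around both, gives the general shape of the channel-and-space attention described above.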
“…The models proposed in [33] and [35] converge at 50 and 100 epochs, respectively. To solve this problem, quite a few algorithms extract the spatial and spectral features separately and introduce the attention mechanism for HSI classification [36][37][38][39][40][41]. For example, Zhu et al. [36] propose an end-to-end residual spectral-spatial attention network (RSSAN), which can adaptively select spatial and spectral information.…”
Section: Introduction
confidence: 99%