2022
DOI: 10.1109/lgrs.2021.3111985
Spectral–Spatial Residual Graph Attention Network for Hyperspectral Image Classification

Cited by 21 publications (7 citation statements) | References 35 publications
“…GAT [36][37][38] is a variant of Graph Neural Network (GNN) and mainly consists of a graph attention layer (GAL). GAT iteratively updates the representation of each node by aggregating the representations of neighboring nodes using a multi-head attention network mechanism, thus enabling the adaptive assignment of weights to different neighboring nodes.…”
Section: Graph Attention Network
confidence: 99%
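The update rule described in the excerpt above can be illustrated with a single-head graph attention layer. The following is a minimal PyTorch sketch of that mechanism (a Velickovic-style GAT layer); the layer sizes, LeakyReLU slope, and dense adjacency representation are illustrative assumptions, not details taken from the cited paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    """Single attention head: score neighbours, softmax the scores, aggregate."""
    def __init__(self, in_dim, out_dim, alpha=0.2):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)   # shared linear transform
        self.a = nn.Linear(2 * out_dim, 1, bias=False)    # attention scoring vector
        self.leaky_relu = nn.LeakyReLU(alpha)

    def forward(self, h, adj):
        # h: (N, in_dim) node features; adj: (N, N) binary adjacency (with self-loops)
        Wh = self.W(h)                                    # (N, out_dim)
        N = Wh.size(0)
        # Pairwise concatenation [Wh_i || Wh_j] for every node pair
        Wh_i = Wh.unsqueeze(1).expand(N, N, -1)
        Wh_j = Wh.unsqueeze(0).expand(N, N, -1)
        e = self.leaky_relu(self.a(torch.cat([Wh_i, Wh_j], dim=-1))).squeeze(-1)
        # Mask non-neighbours; softmax yields the adaptive weights per neighbour
        e = e.masked_fill(adj == 0, float('-inf'))
        attn = F.softmax(e, dim=-1)                       # (N, N)
        return F.elu(attn @ Wh)                           # aggregated node update

# Example: 5 nodes with 10-dim features; self-loops keep the softmax well defined
h = torch.randn(5, 10)
adj = ((torch.rand(5, 5) > 0.5).float() + torch.eye(5) > 0).float()
out = GraphAttentionLayer(10, 8)(h, adj)                  # (5, 8)
```

In the multi-head variant mentioned in the excerpt, several such layers run in parallel and their outputs are concatenated (or averaged at the final layer).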
“…Sha et al [37] applied a graph attention network to HSIC and represented the relationships between neighboring nodes adaptively, but did not fully use the deep spatial-spectral features of the images when constructing the graph structure. An HSIC model (S2RGANet) using a spatial-spectral residual graph attention mechanism was proposed by Xu et al [38]. This model effectively improves classification accuracy by constructing a deep convolutional residual module while introducing a graph attention mechanism to obtain more important spatial information, but the training time of this network model is long.…”
Section: Introduction
confidence: 99%
“…Model performance inevitably meets a bottleneck in complex scenes that need to be finely classified, so Hong provides a general multimodal deep learning (MDL) framework which applies to pixel-wise classification tasks and spatial information models [54]. Xu [55] finds that conventional convolution kernels cannot process rich spatial information effectively, so the spectral-spatial residual graph attention network is proposed, including spectral residual and graph attention convolution modules. Hong proposed the SpectralFormer network to enhance the representation of the sequence attributes of spectral signatures, which learns spectrally local sequence information from neighboring bands [56].…”
Section: Introduction
confidence: 99%
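The spectral residual module mentioned in the excerpt above can be sketched as 3-D convolutions that act only along the spectral axis, wrapped in a skip connection. The kernel size, channel count, and use of batch normalization below are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class SpectralResidualBlock(nn.Module):
    """Two spectral-only 3-D convolutions with an identity skip connection."""
    def __init__(self, channels=24, spectral_kernel=7):
        super().__init__()
        k = (spectral_kernel, 1, 1)            # convolve along the band axis only
        pad = (spectral_kernel // 2, 0, 0)
        self.block = nn.Sequential(
            nn.Conv3d(channels, channels, k, padding=pad, bias=False),
            nn.BatchNorm3d(channels),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, k, padding=pad, bias=False),
            nn.BatchNorm3d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # x: (batch, channels, bands, height, width) hyperspectral feature cube
        return self.relu(x + self.block(x))    # residual connection preserves input features

# Example: a 24-channel cube with 100 bands over a 9x9 spatial patch
x = torch.randn(2, 24, 100, 9, 9)
y = SpectralResidualBlock()(x)                 # output shape: (2, 24, 100, 9, 9)
```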
“…This has stimulated interest among researchers in the community of HSI classification. Representative DL methods include convolutional neural networks (CNNs) [19], recurrent neural networks (RNNs) [20], [21], graph convolutional networks (GCNs) [22], [23], and capsule networks (CapsNets) [24], among which CNNs have been widely applied to HSI classification. Hu et al [25] and Chen et al [26] first employed CNNs for HSI classification in the spectral, spatial, and spatial-spectral domains.…”
Section: Introduction
confidence: 99%