2023
DOI: 10.3390/rs15174235

Multiscale Pixel-Level and Superpixel-Level Method for Hyperspectral Image Classification: Adaptive Attention and Parallel Multi-Hop Graph Convolution

Junru Yin,
Xuan Liu,
Ruixia Hou
et al.

Abstract: Convolutional neural networks (CNNs) and graph convolutional networks (GCNs) have led to promising advancements in hyperspectral image (HSI) classification; however, traditional CNNs with fixed square convolution kernels are insufficiently flexible to handle irregular structures. Similarly, GCNs that employ superpixel nodes instead of pixel nodes may overlook pixel-level features; both networks tend to extract features locally and cause loss of multilayer contextual semantic information during feature extracti…

Cited by 4 publications (1 citation statement)
References 57 publications
“…However, the computation of a pixel-level graph is usually prohibitive for large-scale HSI data due to its high computational space requirements and time complexity. To alleviate the drawback, many studies adopt a superpixel-level graph [23][24][25], which can greatly reduce the number of nodes. Nonetheless, such a process is limited not only by the absence of superpixel-level labels but also by the reliability of superpixel segmentation.…”
Section: Introduction
confidence: 99%
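The node-count saving described in the statement above can be illustrated with a minimal sketch. The code below builds a synthetic HSI cube and contrasts the number of pixel-level graph nodes with superpixel-level nodes; for simplicity it approximates superpixels with a regular grid of averaged blocks, whereas real pipelines use irregular segmentations such as SLIC. All sizes and names here are hypothetical, not taken from the cited paper.

```python
import numpy as np

# Hypothetical small HSI cube: 60x60 pixels, 8 spectral bands (synthetic data).
H, W, B = 60, 60, 8
rng = np.random.default_rng(0)
hsi = rng.random((H, W, B))

# Pixel-level graph: one node per pixel, so the adjacency matrix is (H*W) x (H*W).
pixel_nodes = H * W  # 3600 nodes

# Superpixel-level graph (sketch): approximate superpixels with s x s blocks
# and average the spectra inside each block to get one node feature per block.
s = 10
superpixels = hsi.reshape(H // s, s, W // s, s, B).mean(axis=(1, 3))
superpixel_nodes = superpixels.shape[0] * superpixels.shape[1]  # 36 nodes

print(pixel_nodes, superpixel_nodes)  # 3600 vs 36: a 100x node reduction
# The superpixel adjacency would be 36x36 instead of 3600x3600, which is the
# computational saving (and the dependence on segmentation quality) the
# citing statement refers to.
```

Note that the averaging step is also where the statement's caveat bites: if a block (or a real superpixel) straddles a class boundary, its averaged spectrum mixes classes, and no ground-truth label exists at the superpixel level.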