2017
DOI: 10.1109/jstars.2017.2655516
R-VCANet: A New Deep-Learning-Based Hyperspectral Image Classification Method

Cited by 162 publications (69 citation statements)
References 35 publications
“…Furthermore, in [37] a new deep-learning model named R-VCANet is designed for hyperspectral image classification. The R-VCANet model is a combination of the Rolling Guidance Filter (RGF) and a Vertex Component Analysis Network (VCANet).…”
Section: Discussion
confidence: 99%
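For readers unfamiliar with the RGF component named above, the following is a minimal sketch of the standard rolling guidance filter (Gaussian removal of small structures, then iterative joint bilateral edge recovery) applied to a single image band. It is an illustration under assumptions, not the authors' implementation: the window radius, the sigma defaults, and the wrap-around border handling via np.roll are all simplifications.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def joint_bilateral(src, guide, sigma_s, sigma_r, radius=4):
    """Simplified joint bilateral filter: spatial weights come from the
    pixel offset, range weights from the guidance image. Borders wrap
    around (np.roll), a simplification for this sketch."""
    out = np.zeros_like(src)
    norm = np.zeros_like(src)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            spatial = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma_s ** 2))
            g_shift = np.roll(np.roll(guide, dy, axis=0), dx, axis=1)
            s_shift = np.roll(np.roll(src, dy, axis=0), dx, axis=1)
            w = spatial * np.exp(-(guide - g_shift) ** 2 / (2.0 * sigma_r ** 2))
            out += w * s_shift
            norm += w
    return out / norm

def rolling_guidance_filter(band, sigma_s=3.0, sigma_r=0.1, iters=4):
    """Rolling guidance filter on one band.
    Step 1: remove small structures with Gaussian smoothing.
    Step 2: iteratively recover large-scale edges by joint-filtering the
    original band against the current guidance image."""
    guide = gaussian_filter(band, sigma_s)
    for _ in range(iters):
        guide = joint_bilateral(band, guide, sigma_s, sigma_r)
    return guide
```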
“…R-VCANet is built on the natural characteristics of HSI data: spectral properties and spatial information. Hence, the method proposed in [37] performs better for hyperspectral image classification, especially when labeled samples are limited.…”
Section: Discussion
confidence: 99%
“…Moreover, since we need sub-feature sets drawn from the spectral features, we must expand each pixel's spectrum into a group of features. Motivated by the effectiveness of RGF and the improvement it brings to HSI classification [37], in this paper we use RGF to obtain the sub-feature set from spectral information.…”
Section: RGF
confidence: 99%
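As a sketch of how RGF could produce such a spectral sub-feature set, the snippet below applies the rolling_guidance_filter function from the previous sketch to every band of an HSI cube. The (H, W, bands) layout, the [0, 1] value range, and the parameter defaults are assumptions for illustration, not details taken from the cited paper.

```python
import numpy as np

def rgf_feature_cube(cube, sigma_s=3.0, sigma_r=0.1, iters=4):
    """Apply the rolling guidance filter independently to every spectral
    band, yielding a smoothed cube whose per-pixel spectra can be split
    into sub-feature sets. Assumes cube shape (H, W, B) in [0, 1]."""
    h, w, bands = cube.shape
    out = np.empty_like(cube)
    for b in range(bands):
        out[:, :, b] = rolling_guidance_filter(cube[:, :, b],
                                               sigma_s, sigma_r, iters)
    return out

# Usage on a synthetic cube (hypothetical data, not a real scene):
cube = np.random.rand(64, 64, 20)
features = rgf_feature_cube(cube)
pixel_spectrum = features[10, 10, :]  # smoothed spectrum for one pixel
```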
“…Wei Li et al. proposed hyperspectral image classification using deep pixel-pair features [1]. Bin Pan et al. proposed a vertex component analysis network that achieved better performance than several state-of-the-art methods [40].…”
Section: Introduction
confidence: 99%
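To make the vertex component analysis step concrete, here is a simplified VCA sketch in the spirit of the algorithm underlying VCANet: each iteration projects the data onto a direction orthogonal to the endmembers found so far and takes the most extreme pixel as the next simplex vertex. The SNR-dependent preprocessing of the full VCA algorithm is omitted as a simplification, and nothing here reproduces how [40] builds network filters from the extracted signatures.

```python
import numpy as np

def vca(Y, num_endmembers, seed=0):
    """Simplified Vertex Component Analysis.
    Y has shape (bands, pixels). Repeatedly project the data onto a
    random direction orthogonal to the span of the endmembers found so
    far, and take the pixel with the largest |projection| as the next
    vertex of the data simplex."""
    rng = np.random.default_rng(seed)
    bands, _ = Y.shape
    A = np.zeros((bands, num_endmembers))
    A[0, 0] = 1.0                        # standard dummy initialization
    indices = []
    for k in range(num_endmembers):
        w = rng.standard_normal(bands)
        f = w - A @ np.linalg.pinv(A) @ w  # orthogonal to current span
        f /= np.linalg.norm(f)
        v = f @ Y                          # project all pixels
        idx = int(np.argmax(np.abs(v)))    # extreme pixel = vertex
        indices.append(idx)
        A[:, k] = Y[:, idx]
    return A, indices
```

Here Y would be the unfolded cube, e.g. Y = cube.reshape(-1, bands).T; in a VCANet-style model the signatures extracted this way would then be used to construct the network's convolution kernels, per the model description cited above.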