2017
DOI: 10.1080/22797254.2017.1274566

Spectral-Spatial Classification of Hyperspectral Imagery Based on Stacked Sparse Autoencoder and Random Forest

Abstract: Exploiting spectral-spatial information for hyperspectral image (HSI) classification at different spatial resolutions is of great interest. This paper proposes a new spectral-spatial deep learning-based classification paradigm. First, pixel-based scale transformation and class separability criteria are employed to determine an appropriate spatial resolution for the HSI, and the spectral and spatial information (i.e., both implicit and explicit features) are then integrated to construct a joint spectral-spatial…
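The abstract describes a pipeline in which deep features learned by a stacked sparse autoencoder (SSAE) are fed to a random forest classifier. The sketch below is a hedged illustration of that general paradigm only, not the authors' implementation: the layer sizes, the L1 sparsity penalty (used here in place of whatever sparsity constraint the paper applies), the training settings, and the placeholder data are all assumptions.

```python
# Hedged sketch: SSAE features from joint spectral-spatial pixel vectors,
# classified by a random forest. All sizes and hyperparameters are illustrative.
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

class SparseAE(nn.Module):
    """One autoencoder layer; stacking several of these gives the SSAE."""
    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

    def forward(self, x):
        h = self.encoder(x)
        return h, self.decoder(h)

def train_layer(ae, x, epochs=50, sparsity_weight=1e-3, lr=1e-3):
    """Reconstruction loss plus an L1 penalty on hidden activations (sparsity)."""
    opt = torch.optim.Adam(ae.parameters(), lr=lr)
    for _ in range(epochs):
        h, x_rec = ae(x)
        loss = nn.functional.mse_loss(x_rec, x) + sparsity_weight * h.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return ae

# Placeholder data: joint spectral-spatial vectors (n_pixels, n_features) in [0, 1].
x = torch.rand(1000, 220)
y = torch.randint(0, 16, (1000,))

# Greedy layer-wise training of a two-layer stack, then classify the codes.
ae1 = train_layer(SparseAE(220, 100), x)
h1 = ae1.encoder(x).detach()
ae2 = train_layer(SparseAE(100, 50), h1)
features = ae2.encoder(h1).detach().numpy()

rf = RandomForestClassifier(n_estimators=200).fit(features, y.numpy())
print(rf.score(features, y.numpy()))
```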

Cited by 69 publications (38 citation statements)
References 43 publications (45 reference statements)
“…In our proposed MSSN model, 7 × 7 × 200, 11 × 11 × 200, and 15 × 15 × 200 hyperspectral neighborhood blocks are used as inputs to extract multi-scale features from HSIs. Taking the 7 × 7 × 200 neighborhood block as an example, in the first convolutional layer we use 24 3D convolution kernels of size 1 × 1 × 20 to convolve the input neighborhood block with a stride of (1, 1, 20), obtaining 24 feature cubes of size 7 × 7 × 10. Because HSIs are rich in spectral features, the 1 × 1 × 20 kernels allow the convolution to focus on spectral features and quickly reduce dimensionality.…”
Section: MSSN for HSI Classification
confidence: 99%
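The shapes quoted above can be checked directly: with a 1 × 1 × 20 kernel and a spectral stride of 20, the 200-band axis shrinks to (200 − 20) / 20 + 1 = 10. The snippet below is a minimal sketch of that single layer (not the MSSN authors' code); the mapping of the spectral axis to PyTorch's depth dimension is my assumption.

```python
# Hedged sketch of the convolution described above: a 7 x 7 x 200 block
# convolved with 24 kernels of size 1 x 1 x 20 at stride (1, 1, 20) yields
# 24 feature cubes of size 7 x 7 x 10.
import torch
import torch.nn as nn

# Conv3d expects (batch, channels, depth, height, width); the spectral axis
# is placed on depth, so the neighborhood block becomes (1, 1, 200, 7, 7).
block = torch.rand(1, 1, 200, 7, 7)
conv = nn.Conv3d(in_channels=1, out_channels=24,
                 kernel_size=(20, 1, 1), stride=(20, 1, 1))
out = conv(block)
print(out.shape)  # torch.Size([1, 24, 10, 7, 7]) -> 24 cubes of 7 x 7 x 10
```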
“…Traditional HSI classification methods, like support vector machine (SVM) [16,17] and random forest [18], have focused on extracting spectral features of HSIs while ignoring spatial features. As a typical deep learning model, the stacked autoencoder (SAE) [19,20] can extract both spatial and spectral information and then fuse them for HSI classification. Deep belief networks (DBN) [21] and restricted Boltzmann machines [22] have been proposed for combining spatial information and spectral information of HSIs.…”
Section: Introduction
confidence: 99%
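The statement above contrasts spectral-only classifiers (SVM, random forest) with deep models that also exploit spatial context. The snippet below is a hedged illustration of those spectral-only baselines only, not the cited works' code; the data shapes, split, and hyperparameters are placeholders.

```python
# Hedged illustration: SVM and random forest applied to raw per-pixel spectra,
# with no spatial context. Data and hyperparameters are placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

spectra = np.random.rand(2000, 200)      # (n_pixels, n_bands)
labels = np.random.randint(0, 16, 2000)  # ground-truth class per pixel

x_tr, x_te, y_tr, y_te = train_test_split(spectra, labels, test_size=0.8)

svm = SVC(kernel="rbf", C=100, gamma="scale").fit(x_tr, y_tr)
rf = RandomForestClassifier(n_estimators=200).fit(x_tr, y_tr)
print("SVM:", svm.score(x_te, y_te), "RF:", rf.score(x_te, y_te))
```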
“…Soybeans-clean                 593
13  Wheat                        205
14  Woods                       1265
15  Building-Grass-Trees-Drives  386
16  Stone-steel Towers            93
    Total                     10,249…”
Section: Datasets
confidence: 99%
“…Experimental results demonstrate the effectiveness of our proposed network. … widely used, and HSI classification performance has gradually improved from the use of only spectral features to the joint use of spectral-spatial features [8][9][10][11]. To extract spectral-spatial features, deep learning models have been introduced for the purpose of HSI classification [12][13][14][15][16][17][18][19]. The main idea of deep learning is to extract more abstract features from raw data by means of multi-layer superimposed representation [20][21][22].…”
confidence: 99%