2019
DOI: 10.3390/rs11151794
Alternately Updated Spectral–Spatial Convolution Network for the Classification of Hyperspectral Images

Abstract: The connection structure in the convolutional layers of most deep learning-based algorithms used for the classification of hyperspectral images (HSIs) has typically been in the forward direction. In this study, an end-to-end alternately updated spectral–spatial convolutional network (AUSSC) with a recurrent feedback structure is used to learn refined spectral and spatial features for HSI classification. The proposed AUSSC includes alternately updated blocks in which each layer serves as both an input and an output […]

Cited by 23 publications (5 citation statements) · References 26 publications
“…When calculating the loss, the primary loss and the auxiliary loss are combined to aggregate global and local features, resulting in more accurate lunar crater detection than networks such as UNet, HRNet, and others [19]. Wang et al. [20] proposed the center loss as an auxiliary objective function and showed that it can improve hyperspectral image classification results. An auxiliary loss can also alleviate the vanishing-gradient problem in the network structure.…”
Section: A. Auxiliary Loss in Neural Networks (mentioning, confidence: 99%)
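The combination of a primary classification loss with a weighted auxiliary center loss, as described in the statement above, can be sketched roughly as follows. This is a minimal illustration, not the cited authors' implementation; the feature dimension, class count, and the weighting factor `lambda_aux` are assumed values.

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Auxiliary center loss: pulls each feature vector toward a learnable
    center of its class (minimal sketch, not the cited paper's code)."""
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):
        # Mean squared distance between each sample and its class center
        return ((features - self.centers[labels]) ** 2).sum(dim=1).mean()

# Assumed dimensions, for illustration only
num_classes, feat_dim = 16, 128
primary_loss_fn = nn.CrossEntropyLoss()
aux_loss_fn = CenterLoss(num_classes, feat_dim)
lambda_aux = 0.01  # assumed weight of the auxiliary term

features = torch.randn(8, feat_dim)            # output of the feature extractor
logits = torch.randn(8, num_classes)           # output of the classifier head
labels = torch.randint(0, num_classes, (8,))

# Total loss = primary (cross-entropy) + weighted auxiliary (center) loss
loss = primary_loss_fn(logits, labels) + lambda_aux * aux_loss_fn(features, labels)
loss.backward()
```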
“…To reduce the model complexity of the 3D CNN, Roy et al. [22] proposed a hybrid model consisting of a 2D CNN and a 3D CNN. In addition, Wang et al. [23] decomposed the 3D convolution kernel into three small 1D convolution kernels to reduce the number of parameters, preventing the 3D CNN from overfitting.…”
Section: Introduction (mentioning, confidence: 99%)
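A factorization of a k×k×k 3D convolution into three 1D convolutions, in the spirit of the decomposition mentioned in the statement above, might look like the following sketch. The channel counts, kernel size, and input cube size are assumed for illustration and are not the cited authors' exact architecture.

```python
import torch
import torch.nn as nn

in_ch, out_ch, k = 32, 64, 3   # assumed channel counts and kernel size

# Standard 3D convolution: out_ch * in_ch * k^3 weights (plus bias)
full3d = nn.Conv3d(in_ch, out_ch, kernel_size=(k, k, k), padding=k // 2)

# Decomposition into three 1D convolutions along the spectral, height,
# and width axes: on the order of out_ch * in_ch * k weights per stage
decomposed = nn.Sequential(
    nn.Conv3d(in_ch, out_ch, kernel_size=(k, 1, 1), padding=(k // 2, 0, 0)),
    nn.Conv3d(out_ch, out_ch, kernel_size=(1, k, 1), padding=(0, k // 2, 0)),
    nn.Conv3d(out_ch, out_ch, kernel_size=(1, 1, k), padding=(0, 0, k // 2)),
)

x = torch.randn(2, in_ch, 20, 9, 9)   # (batch, channels, bands, height, width)
print(full3d(x).shape, decomposed(x).shape)              # same output size
print(sum(p.numel() for p in full3d.parameters()),
      sum(p.numel() for p in decomposed.parameters()))   # fewer parameters
```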
“…In recent years, deep learning methods, with their automatic extraction of deep discriminative features and end-to-end learning characteristics, have gradually been applied to hyperspectral image classification [1][2][3]. These methods include the Stacked Autoencoder Network (SAN) [4,5], Deep Belief Networks (DBN) [6], the Recurrent Neural Network (RNN) [7], and the Convolutional Neural Network (CNN) [8][9][10][11][12][13][14][15][16][17][18][19][20][21][22][23][24][25][26][27]. With their powerful feature extraction capability, CNNs are the most widely used for hyperspectral image classification, mainly comprising classification methods based on spectral features [8][9][10], methods based on spatial features [11][12][13], and methods based on spatial–spectral features [14][15][16][17][18].…”
Section: Introduction (mentioning, confidence: 99%)
“…Classification methods based on spatial–spectral features can make comprehensive use of both the spatial and spectral information in hyperspectral images, which greatly improves HSI classification accuracy. Depending on how the features are combined, these methods mainly fall into three groups: (1) extracting spectral and spatial features separately and then fusing them for final classification [19–21]; (2) extracting spatial and spectral features jointly within the network model to complete the classification [22–24]; (3) classifying by the spectral features first and then post-processing the classification results [18, 25–27]. Nevertheless, hyperspectral image classification methods based on CNNs still have the following drawbacks.…”
Section: Introduction (mentioning, confidence: 99%)
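The first of the three fusion strategies listed in the statement above (separate spectral and spatial feature extraction followed by feature fusion) could be sketched roughly as follows. The branch architectures, band count, patch size, and class count are assumed purely for illustration and do not correspond to any specific cited network.

```python
import torch
import torch.nn as nn

bands, patch, num_classes = 103, 9, 9   # assumed dataset dimensions

# Spectral branch: 1D convolutions over the band dimension of the center pixel
spectral_branch = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),        # -> (batch, 16)
)

# Spatial branch: 2D convolutions over the patch, bands treated as channels
spatial_branch = nn.Sequential(
    nn.Conv2d(bands, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),        # -> (batch, 32)
)

classifier = nn.Linear(16 + 32, num_classes)      # fusion by concatenation

cube = torch.randn(4, bands, patch, patch)        # (batch, bands, h, w)
center_spectrum = cube[:, :, patch // 2, patch // 2].unsqueeze(1)  # (batch, 1, bands)

# Fuse the two feature vectors and classify
fused = torch.cat([spectral_branch(center_spectrum),
                   spatial_branch(cube)], dim=1)
logits = classifier(fused)                        # (batch, num_classes)
```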