2020
DOI: 10.1109/access.2020.3027776

Rotation Equivariant Convolutional Neural Networks for Hyperspectral Image Classification

Abstract: Detection of surface material based on hyperspectral imaging (HSI) analysis is an important and challenging task in remote sensing. It is widely known that spectral-spatial data exploitation performs better than traditional spectral pixel-wise procedures. Nowadays, convolutional neural networks (CNNs) have been shown to be a powerful deep learning (DL) technique due to their strong feature extraction ability. CNNs not only combine spectral-spatial information in a natural way, but have also been shown to be able to learn t…
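As a rough illustration of the idea in the paper's title, the sketch below is not the authors' architecture; it only shows one simple way to make a convolution's response insensitive to 90-degree rotations of an input patch, by applying the same kernel at four orientations and pooling over the orientation axis. The patch size, band count, and kernel shapes are placeholder values.

```python
# Minimal sketch (assumed, not the paper's implementation): apply one kernel at
# four 90-degree orientations and max-pool over the orientation axis, so the
# pooled response is invariant to 90-degree rotations of the input patch.
import torch
import torch.nn.functional as F

def rot_pooled_conv(x, weight, bias=None):
    """x: (N, C_in, H, W) spatial patch; weight: (C_out, C_in, k, k)."""
    responses = []
    for k in range(4):                                # 0, 90, 180, 270 degrees
        w_rot = torch.rot90(weight, k, dims=(2, 3))   # rotate the kernel
        responses.append(F.conv2d(x, w_rot, bias=bias, padding="same"))
    stacked = torch.stack(responses, dim=0)           # (4, N, C_out, H, W)
    return stacked.max(dim=0).values                  # pool over orientations

# Toy hyperspectral patch: 1 sample, 30 spectral bands (e.g. after PCA), 9x9 window.
x = torch.randn(1, 30, 9, 9)
w = torch.randn(16, 30, 3, 3)
print(rot_pooled_conv(x, w).shape)   # torch.Size([1, 16, 9, 9])
```

Keeping the full orientation stack instead of pooling it would give an orientation-indexed feature map that transforms along with the input, which is closer in spirit to group-equivariant convolutions.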

Cited by 28 publications (21 citation statements). References 118 publications.
“…The discriminative power of the extracted features can be further improved by combining both the max and min convolutional features before the ReLU non-linearity, as reported in [191] for the classification task. CNNs fail to exploit rotation equivariance in a natural way; [192] introduced translation-equivariant representations of input features, which provide extra robustness to spatial feature locations for HSIC.…”
Section: Spectral-Spatial CNN Framework for HSIC
confidence: 99%
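The max-min feature idea quoted above can be sketched as follows. This is an assumed CReLU-style construction, not code from [191] or [192]: both the positive and the negated convolution responses are kept before the ReLU, so strongly negative ("min") activations are not discarded.

```python
# Sketch (assumed construction): concatenate +conv and -conv responses before ReLU,
# doubling the channel count while preserving "min" information.
import torch
import torch.nn as nn

class MaxMinConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        self.relu = nn.ReLU()

    def forward(self, x):
        z = self.conv(x)
        # keep max-like (+z) and min-like (-z) responses, then apply the non-linearity
        return self.relu(torch.cat([z, -z], dim=1))   # channels: 2 * out_ch

layer = MaxMinConv2d(in_ch=30, out_ch=16)
print(layer(torch.randn(1, 30, 9, 9)).shape)   # torch.Size([1, 32, 9, 9])
```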
“…the approximation is tight. Entirely analogous to (18) and (19), we can also modify this approximation to make it smoother; see (20). For the old approximation we chose ν = 44, as suggested in [48], and for the new one ν = 1.6.…”
Section: Sub-Riemannian Approximation
confidence: 99%
“…w_1 = w_3 = 1 and w_2 = 8. We see the exact distance d alongside the old sub-Riemannian approximation ρ_{b,sr,old} (19) and the new approximation ρ_{b,sr}…”
confidence: 99%
“…As stated in [195, 198], PCA is exploited at its best for feature extraction, selection, and reduction to achieve higher accuracy and performance quality. PCA is one of the best preprocessing methods considered to date for improved spectral dimension reduction [180], proper selection of spectral bands and their multiscale features in a segmented format [181, 199], noise-reduced spectral analysis [27], and feature extraction [130, 196]. PCA, in combination with SVM [195, 200], DL for feature reduction and better classification [182, 183], CNN with multiscale feature extraction [188, 189], and sparse tensor technology [190], has been widely appreciated in research.…”
Section: Discussion
confidence: 99%
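The PCA-based spectral reduction described in this excerpt can be sketched as below; the cube dimensions and the number of retained components are illustrative placeholders, not values from the cited works.

```python
# Illustrative PCA spectral reduction for a hyperspectral cube: flatten pixels,
# project each spectrum onto the leading principal components, reshape back.
import numpy as np
from sklearn.decomposition import PCA

def reduce_spectral_bands(cube, n_components=30):
    """cube: (H, W, B) hyperspectral image; returns (H, W, n_components)."""
    H, W, B = cube.shape
    pixels = cube.reshape(-1, B)                       # one spectrum per row
    reduced = PCA(n_components=n_components).fit_transform(pixels)
    return reduced.reshape(H, W, n_components)

cube = np.random.rand(64, 64, 200)                     # synthetic 200-band cube
print(reduce_spectral_bands(cube).shape)               # (64, 64, 30)
```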