2023
DOI: 10.1109/access.2023.3255164

Hyperspectral Image Classification: An Analysis Employing CNN, LSTM, Transformer, and Attention Mechanism

Abstract: Hyperspectral images contain tens to hundreds of bands, implying a high spectral resolution. This high spectral resolution allows for obtaining a precise signature of structures and compounds that make up the captured scene. Among the types of processing that may be applied to Hyperspectral Images, classification using machine learning models stands out. The classification process is one of the most relevant steps for this type of image. It can extract information using spatial and spectral information and spa…
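For orientation, the short Python sketch below illustrates, with synthetic data, what the abstract's "tens to hundreds of bands" means in practice: each pixel carries a spectral signature vector, and spatial-spectral classifiers typically operate on a small neighborhood patch around the pixel. The cube dimensions (145 x 145 x 200) and the patch size are illustrative assumptions, not values taken from the paper.

import numpy as np

# Synthetic hyperspectral cube: height x width x bands (sizes are assumptions).
H, W, B = 145, 145, 200
cube = np.random.rand(H, W, B).astype(np.float32)

# The spectral signature of one pixel is its vector of band values.
row, col = 70, 80
signature = cube[row, col, :]              # shape: (200,)

# Spatial-spectral methods classify a small neighborhood patch around the
# pixel instead of the lone spectral vector.
patch_size = 9
half = patch_size // 2
patch = cube[row - half:row + half + 1,
             col - half:col + half + 1, :]  # shape: (9, 9, 200)

print(signature.shape, patch.shape)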

Cited by 11 publications (2 citation statements)
References 60 publications
“…Therefore, the joint integration of spatial-spectral features within neighboring regions as the input can lead to learning more discriminating features than spatial-wise or spectral-wise classification [40][41][42][43]. As an end-to-end deep learning algorithm that can perform representation learning and classification tasks simultaneously, CNNs are designed to learn complex nonlinear distribution features in images by automatically extracting high-level spatial features, which are especially suitable for dealing with high-dimensional hyperspectral images with strong neighboring correlations [44,45]. In this aspect, 3D CNN [46], fully convolutional network [47], convolutional capsule network [48], and multiple deep learning or dual-channel frameworks such as ConvLSTM [49,50], 1D-2D CNN [51], 3D-2D CNN [52,53], convolutional autoencoder [54], graph convolutional network [55], CNN-local discriminant embedding [56], CNN-SAE [57], CNN-transformer learning [58], and joint attention network [59] have been introduced for spectral-spatial hyperspectral classification and achieved state-of-the-art performance.…”
Section: I. Introduction (mentioning)
confidence: 99%
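As a concrete, hedged illustration of the 3D CNN family this statement cites, the following PyTorch sketch classifies spectral-spatial patches with 3-D convolutions that run jointly over the band and spatial axes. The layer widths, kernel sizes, 200 bands, 9 x 9 patch, and 16 classes are assumptions chosen for brevity, not the architecture of any cited work.

import torch
import torch.nn as nn

class Simple3DCNN(nn.Module):
    # Minimal 3-D CNN for spectral-spatial patch classification (illustrative only).
    def __init__(self, num_classes=16):
        super().__init__()
        self.features = nn.Sequential(
            # Input: (batch, 1, bands, height, width); 3-D kernels convolve
            # jointly over the spectral axis and the two spatial axes.
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(2, 1, 1)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # global pooling over bands and space
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Batch of 4 patches: 9 x 9 spatial neighborhood, 200 bands each (assumed sizes).
patches = torch.randn(4, 1, 200, 9, 9)
logits = Simple3DCNN()(patches)
print(logits.shape)   # torch.Size([4, 16])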
“…A CNN basically consists of a series of hidden layers, each of which performs convolution processing on the input data. The convolution operation also applies a weight, or filter, to the input data to produce the output [30]. Transformers, like the One-Dimensional Convolutional Neural Network (1D-CNN), began to gain popularity in Natural Language Processing (NLP) applications [31].…”
unclassified
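The translated statement above describes a CNN as stacked hidden layers whose convolution filters (weights) transform the input into an output; a minimal 1D-CNN operating along a single pixel's spectral axis is sketched below as one way to picture this. The layer sizes, 200 bands, and 16 classes are illustrative assumptions, not parameters from the cited works.

import torch
import torch.nn as nn

# Minimal 1D-CNN: each Conv1d layer slides learned filters (weights) along the
# spectral axis of a pixel's band vector, which is the operation the quoted
# passage describes.
spectral_cnn = nn.Sequential(
    nn.Conv1d(in_channels=1, out_channels=16, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),   # collapse the spectral axis
    nn.Flatten(),
    nn.Linear(32, 16),         # 16 output classes (assumed)
)

signatures = torch.randn(8, 1, 200)   # batch of 8 pixel spectra, 200 bands each
logits = spectral_cnn(signatures)
print(logits.shape)                   # torch.Size([8, 16])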