2023
DOI: 10.1109/tgrs.2023.3242776
A Positive Feedback Spatial-Spectral Correlation Network Based on Spectral Slice for Hyperspectral Image Classification

Abstract: The emergence of convolutional neural networks (CNNs) has greatly promoted the development of hyperspectral image classification (HSIC). However, two serious problems remain: hyperspectral images (HSIs) provide few labeled samples, and the spectral characteristics of different objects in HSIs are sometimes similar across classes. These problems hinder further improvement of HSIC performance. To this end, in this paper, a positive feedback spatial-spectral correlation network based on spec…
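CNN-based HSIC models of the kind the abstract describes typically classify each pixel from a small spatial patch carrying the full spectral vector. A minimal sketch of that patch extraction is below; the patch size and band count are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def extract_patch(hsi, row, col, patch_size=5):
    """Extract a spatial patch (all spectral bands) centered on a pixel,
    reflecting at the borders so edge pixels still get a full patch."""
    half = patch_size // 2
    padded = np.pad(hsi, ((half, half), (half, half), (0, 0)), mode="reflect")
    return padded[row:row + patch_size, col:col + patch_size, :]

# Toy HSI cube: 32x32 pixels, 103 bands (band count is an illustrative choice)
hsi = np.random.rand(32, 32, 103)
patch = extract_patch(hsi, 0, 0)
print(patch.shape)  # (5, 5, 103)
```

Each such patch feeds the network with joint spatial context (the 5×5 window) and spectral context (the 103 bands), which is the raw material for the spatial-spectral correlations the paper studies.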

Cited by 8 publications (4 citation statements) · References 60 publications
“…Accordingly, it is worthwhile to design a unified network for the same target or sensor. Additionally, more attention is still needed to study the spatial and spectral correlation in HSIs, as motivated by the encouraging performance in various tasks, such as super-resolution mapping [47], super-resolution [48], and image classification [49].…”
Section: Discussion
confidence: 99%
“…However, mixing operations with randomness can easily weaken representation ability (Guo, Mao, and Zhang 2019; Chou et al. 2020). This is primarily because direct interpolation neither considers the complementarity of the two features nor attends to specific feature channels (Hou, Liu, and Wang 2017; Shi, Wu, and Wang 2023; Luo, Xu, and Xu 2022; Zhu et al. 2023), which in turn distorts the distribution of prediction results. As illustrated in Figure 1, given a novel sample of "Retriever" and another randomly picked sample "Linnet", manifold regularization methods, e.g., Mixup (Zhang et al. 2018), CutMix (Yun et al. 2019), and PatchMix (Liu et al. 2021), interpolate their images and labels to train the classifier to predict both categories.…”
Section: Introduction
confidence: 99%
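The random interpolation this statement criticizes can be sketched in a few lines. This is a generic Mixup sketch (Zhang et al. 2018), not the cited paper's own method; the alpha value and sample shapes are illustrative assumptions:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mixup: convexly combine two samples and their one-hot labels with a
    single Beta-distributed coefficient, ignoring feature complementarity."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)  # lam in (0, 1), usually near 0 or 1
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# Toy "Retriever" vs "Linnet" images and one-hot labels (shapes are illustrative)
x_ret, y_ret = np.ones((8, 8, 3)), np.array([1.0, 0.0])
x_lin, y_lin = np.zeros((8, 8, 3)), np.array([0.0, 1.0])
x_mix, y_mix = mixup(x_ret, y_ret, x_lin, y_lin)
print(y_mix.sum())  # total label mass stays 1.0
```

Because the coefficient is drawn at random per pair, the interpolation is blind to which feature channels actually discriminate the two classes — exactly the weakness the citing authors' attention-based selection is meant to address.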
“…This purposeful selection, in contrast to the random selection in manifold regularization, enables the classifier to concentrate more effectively on the novel content during the training stage. Besides, we also exploit the feature complementarity of similar categories and the discriminability of specific feature channels, both of which provide distinctive patterns for classification (Liu et al. 2019; Shi, Wu, and Wang 2023). Building on the above analysis, we propose two attention-based calculations, at the instance and channel levels, respectively.…”
Section: Introduction
confidence: 99%
“…CNNs often stack deep convolutional layers with fixed filter weights, which results in a constrained receptive field and a notable increase in computing cost. Furthermore, the majority of CNN-based classification techniques concentrate on obtaining abstract spatial-spectral feature representations, yet their capacity to extract deep semantic features is restricted [27][28][29].…”
Section: Introduction
confidence: 99%
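The constrained receptive field of stacked fixed-size filters mentioned above can be checked with a standard recurrence: each layer with kernel k and stride s grows the receptive field by (k−1) times the accumulated stride. The layer configurations below are illustrative, not from any cited network:

```python
def receptive_field(layers):
    """Receptive field (in input pixels) of a stack of conv layers,
    given as a list of (kernel_size, stride) pairs."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump  # each layer widens the field by (k-1) * jump
        jump *= s             # stride compounds the step between output pixels
    return rf

# Three stacked 3x3 convolutions with stride 1 see only a 7x7 input window
print(receptive_field([(3, 1), (3, 1), (3, 1)]))  # 7
```

Even many such layers cover only a small window of the input, which is why purely convolutional HSIC models struggle to capture long-range spatial context without striding, dilation, or attention.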