2020
DOI: 10.1109/lgrs.2019.2962768

Multitask Deep Learning With Spectral Knowledge for Hyperspectral Image Classification

Abstract: In this letter, we propose a multitask deep learning method for classifying multiple hyperspectral data sets in a single training run. Deep learning models have achieved promising results on hyperspectral image classification, but their performance relies heavily on sufficient labeled samples, which are scarce for hyperspectral images. However, samples from multiple data sets might together be sufficient to train one deep learning model, thereby improving its performance. To do so, we trained an identical feature extractor fo…
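The abstract describes training one model on multiple hyperspectral data sets through a shared ("identical") feature extractor. A minimal NumPy sketch of that multitask idea, with per-dataset input mappings and classifier heads around a shared feature block (all layer sizes, band counts, class counts, and names are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical hyperspectral sensors with different band counts.
n_bands_a, n_bands_b, n_feat = 103, 144, 32
W_in_a = rng.normal(size=(n_bands_a, n_feat)) * 0.01   # dataset-A input mapping
W_in_b = rng.normal(size=(n_bands_b, n_feat)) * 0.01   # dataset-B input mapping
W_shared = rng.normal(size=(n_feat, n_feat)) * 0.01    # shared feature extractor
W_head_a = rng.normal(size=(n_feat, 9)) * 0.01         # head for dataset-A classes
W_head_b = rng.normal(size=(n_feat, 16)) * 0.01        # head for dataset-B classes

def forward(x, W_in, W_head):
    """Run one task's pixels through its own input layer and head,
    with the middle layer shared by both tasks."""
    h = np.maximum(x @ W_in, 0.0)        # project to the common feature size
    h = np.maximum(h @ W_shared, 0.0)    # shared features (updated by both tasks)
    logits = h @ W_head
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)             # softmax probabilities

probs_a = forward(rng.normal(size=(4, n_bands_a)), W_in_a, W_head_a)
probs_b = forward(rng.normal(size=(4, n_bands_b)), W_in_b, W_head_b)
```

In a real multitask setup, gradients from both datasets' losses would update `W_shared`, which is how the scarce labels of each set jointly benefit one model.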

Cited by 36 publications (37 citation statements)
References 25 publications
“…The tail number here is not the global optimum and will be discussed in section IV-E. All results are reported from the average of ten random training sets. We compare our method with random forest, support vector machine, deep contextual CNN (DCCNN) [29], wide contextual residual network (WCRN) [73], the modified HResNet (the baseline of the proposed method) [43], and the state-of-the-art open-set method CROSR [52]. For open-set CNNs with confidence thresholding, the unknown probability is determined with a threshold of 0.5.…”
Section: Methods
Mentioning confidence: 99%
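The quoted setup determines the unknown probability for open-set CNNs with a confidence threshold of 0.5. A minimal sketch of that rejection rule (the function name, shapes, and unknown label are my own illustrative choices, not from the cited paper):

```python
import numpy as np

def open_set_predict(probs, threshold=0.5, unknown_label=-1):
    """Assign each sample its argmax class, but mark it as unknown
    when the top softmax probability falls below the threshold."""
    top = probs.max(axis=-1)
    pred = probs.argmax(axis=-1)
    return np.where(top >= threshold, pred, unknown_label)

probs = np.array([[0.7, 0.2, 0.1],     # confident  -> class 0
                  [0.4, 0.35, 0.25]])  # low confidence -> unknown (-1)
print(open_set_predict(probs))  # [ 0 -1]
```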
“…In addition, on the right side of the image, three groups of buildings (building-1, building-2, building-3) are not annotated in the original reference map. These buildings are often misclassified as natural land covers such as bare soil [35], [40]- [42] and meadows [43], or other unrelated materials such as asphalt [44], [45].…”
Section: Introduction
Mentioning confidence: 99%
“…Among these deep learning structures, the CNN is widely used in HSI feature extraction tasks owing to local perception, weight sharing, and related properties. In [6], a two-dimensional CNN is used as the basic module; combined with a multitask learning strategy, two data sets are input for a single model training, so that the network itself gains more diverse feature-recognition capabilities.…”
Section: Introduction
Mentioning confidence: 99%
“…In this study, we are going to use multitemporal SAR data from a whole year to produce crop maps, with both a random-sampling training set and a regional-sampling training set. We select three advanced CNNs, namely the wide contextual residual network (WCRN) [42], the HResNet [43], and the Double-Branch Multi-Attention Mechanism (DBMA) network [44], as well as random forest [45], to test their performance. Among the three deep learning models, the WCRN can run on a CPU, and the DBMA has the best performance on benchmark datasets.…”
Section: Introduction
Mentioning confidence: 99%