2016
DOI: 10.3390/rs8090749

A Novel Tri-Training Technique for Semi-Supervised Classification of Hyperspectral Images Based on Diversity Measurement

Abstract: This paper introduces a novel semi-supervised tri-training classification algorithm based on diversity measurement for hyperspectral imagery. In this algorithm, three measures of diversity, i.e., the double-fault measure, the disagreement metric, and the correlation coefficient, are applied to select the optimal classifier combination from different classifiers, e.g., support vector machine (SVM), multinomial logistic regression (MLR), extreme learning machine (ELM), and k-nearest neighbor (KNN). Then, unlabeled samples are…
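The abstract names three pairwise diversity measures. As a minimal sketch, assuming each base classifier's predictions on a labeled validation set are reduced to a 0/1 correctness vector, the measures can be computed from the usual 2x2 contingency counts. The selection rule shown here (pick the triple with the highest mean pairwise disagreement) is an illustrative assumption; the excerpt does not state how the paper combines the three measures.

```python
import numpy as np
from itertools import combinations

def pair_counts(ci, cj):
    """2x2 contingency counts for two classifiers' 0/1 correctness vectors."""
    a = int(np.sum((ci == 1) & (cj == 1)))  # both correct
    b = int(np.sum((ci == 1) & (cj == 0)))  # only the first correct
    c = int(np.sum((ci == 0) & (cj == 1)))  # only the second correct
    d = int(np.sum((ci == 0) & (cj == 0)))  # both wrong
    return a, b, c, d

def double_fault(ci, cj):
    a, b, c, d = pair_counts(ci, cj)
    return d / (a + b + c + d)            # lower = more diverse

def disagreement(ci, cj):
    a, b, c, d = pair_counts(ci, cj)
    return (b + c) / (a + b + c + d)      # higher = more diverse

def correlation(ci, cj):
    a, b, c, d = pair_counts(ci, cj)
    denom = np.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom > 0 else 0.0  # lower = more diverse

def best_triple(correctness):
    """Pick the 3-classifier subset with the highest mean pairwise disagreement.

    correctness: dict mapping a classifier name (e.g. 'SVM', 'MLR', 'ELM',
    'KNN') to its 0/1 correctness vector on a validation set.
    """
    return max(
        combinations(correctness, 3),
        key=lambda triple: np.mean([disagreement(correctness[i], correctness[j])
                                    for i, j in combinations(triple, 2)]),
    )
```

In this framing, a low double-fault value, a low correlation coefficient, or a high disagreement value all indicate a more diverse, and hence potentially more useful, classifier pair.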

Cited by 19 publications (21 citation statements)
References 42 publications

“…Compared with supervised classification methods, the advantages of semi-supervised methods for HSIs can be summarized as follows. First, supervised classification needs a large number of labeled samples to improve classifier performance [47]. However, HSI classification often faces the issue of a limited number of labeled samples, which are costly, effortful, and time-consuming to obtain [28].…”
Section: Discussion (mentioning)
confidence: 99%
“…The spatial information based on the segmentation objects at each scale is then used to constrain the first candidate and construct the candidate set S_u. Finally, the additional unlabeled samples S are chosen from the candidate set S_u with an active learning (AL) method based on the breaking-ties (BT) algorithm [50], to provide training samples for the ensemble system in the next scale layer. The BT algorithm measures the informativeness of a sample as the difference between the largest and second-largest values of its posterior probability distribution.…”
Section: Unlabeled Sample Selection (mentioning)
confidence: 99%
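As a small illustration of the breaking-ties criterion described above (function names are mine, not from [50]): score each unlabeled sample by the gap between its largest and second-largest class posteriors, and treat the smallest gaps as the most informative samples.

```python
import numpy as np

def breaking_ties_scores(posteriors):
    """posteriors: (n_samples, n_classes) array of class probabilities.

    Returns the per-sample gap between the largest and second-largest
    posterior; small gaps mark ambiguous, informative samples.
    """
    top2 = np.sort(posteriors, axis=1)[:, -2:]  # [second-largest, largest]
    return top2[:, 1] - top2[:, 0]

def select_most_informative(posteriors, k):
    """Indices of the k unlabeled samples with the smallest BT gap."""
    return np.argsort(breaking_ties_scores(posteriors))[:k]
```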
“…In [47], the optimal classifier combination selected by the diversity measures was multinomial logistic regression (MLR), k-nearest neighbor (KNN), and extreme learning machine (ELM). In this study, the correlation coefficient, disagreement metric, and double-fault measure were implemented to select the optimal classifier combination.…”
Section: Cooperative Training Strategy Combining Local Features (mentioning)
confidence: 99%
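For context, the cooperative (tri-training) strategy referenced in this excerpt builds on the standard tri-training update of Zhou and Li (2005), in which an unlabeled sample pseudo-labels the third classifier whenever the other two agree on it. The sketch below shows that generic update for scikit-learn-style classifiers (fit/predict); the paper's exact variant, including any confidence thresholds or sample editing, is not shown in the excerpts above.

```python
import numpy as np

def agreed_pseudo_labels(k, classifiers, X_unl):
    """Samples on which the two classifiers other than member k agree,
    together with the agreed prediction used as a pseudo-label."""
    others = [c for i, c in enumerate(classifiers) if i != k]
    p0, p1 = (c.predict(X_unl) for c in others)
    agree = p0 == p1
    return X_unl[agree], p0[agree]

def tri_training_round(classifiers, X_lab, y_lab, X_unl):
    """One cooperative round: each member is refit on the labeled data plus
    the samples its two peers agree on."""
    additions = [agreed_pseudo_labels(k, classifiers, X_unl)
                 for k in range(len(classifiers))]
    for clf, (X_new, y_new) in zip(classifiers, additions):
        clf.fit(np.vstack([X_lab, X_new]),
                np.concatenate([y_lab, y_new]))
    return classifiers
```

Collecting all agreement sets before any refitting keeps each round's pseudo-labels consistent with a single ensemble state.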