2009
DOI: 10.1109/lgrs.2009.2024624
Ensemble Classification Algorithm for Hyperspectral Remote Sensing Data

Cited by 45 publications (6 citation statements)
References 7 publications
“…The RF algorithm belongs to the ensemble learning model family, which integrates multiple base classifiers to make joint decisions [24]. Ensemble learning has demonstrated superior performance in various fields [25]. The Rotation Forest (ROF) algorithm [26], an extension of the RF algorithm, is another ensemble learning model.…”
Section: Introduction (mentioning, confidence: 99%)
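The joint-decision idea in the excerpt above can be sketched as a plurality vote over the outputs of several base classifiers (a minimal illustration; the class labels and per-classifier predictions below are hypothetical):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label predictions by plurality vote.

    predictions: one list of labels per base classifier, all the same
    length (one predicted label per sample).
    """
    n_samples = len(predictions[0])
    combined = []
    for i in range(n_samples):
        votes = [p[i] for p in predictions]           # labels from each classifier
        combined.append(Counter(votes).most_common(1)[0][0])
    return combined

# Three hypothetical base classifiers voting on four pixels:
preds = [
    ["water", "soil", "crop", "crop"],
    ["water", "crop", "crop", "soil"],
    ["water", "soil", "crop", "crop"],
]
print(majority_vote(preds))  # → ['water', 'soil', 'crop', 'crop']
```

Random Forest and Rotation Forest differ in how the base trees are built, but both combine their trees' outputs in essentially this way.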
“…In terms of classifier ensemble technology, two strategies, namely "multiple classifier systems" (Benediktsson, 2009) and "decision fusion" (Fauvel et al, 2006) are employed. Multiple classifier systems are based on the manipulation of training sample sets, including boosting (Freund et al, 2003) and bagging (Breiman, 1996).…”
Section: Introduction (mentioning, confidence: 99%)
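Bagging, named in the excerpt above, manipulates the training sample set by fitting each base learner on a bootstrap replicate drawn with replacement (a minimal sketch; `fit_fn` and the toy majority-class learner are hypothetical stand-ins for a real base classifier):

```python
import random

def bootstrap_sample(X, y, rng):
    """Draw len(X) (sample, label) pairs with replacement from the training set."""
    idx = [rng.randrange(len(X)) for _ in range(len(X))]
    return [X[i] for i in idx], [y[i] for i in idx]

def bagging_fit(X, y, fit_fn, n_estimators=10, seed=0):
    """Train n_estimators base models, each on its own bootstrap replicate."""
    rng = random.Random(seed)
    return [fit_fn(*bootstrap_sample(X, y, rng)) for _ in range(n_estimators)]

# Toy base learner: "model" is just the majority class of its replicate.
majority_class = lambda Xb, yb: max(set(yb), key=yb.count)

X = [[0.1], [0.4], [0.35], [0.8]]
y = ["water", "soil", "soil", "crop"]
models = bagging_fit(X, y, majority_class, n_estimators=5)
print(models)  # five per-replicate majority classes
```

Boosting instead reweights the training samples sequentially, focusing each new learner on the examples its predecessors misclassified.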
“…However, most of these approaches rely on one regression model for the prediction, and are therefore subject to overfitting when the training data is limited (Pal, 2007). In the machine learning community, there is an increasing interest in combining several base learning models into one predictive model in order to improve the model generalization ability (Chi et al, 2009; Zhou, 2009; Zhang and Crawford, 2015). By combining multiple learners, the errors of a single model will likely be compensated by others, and thus help improve the robustness and accuracy of the prediction.…”
Section: Introduction (mentioning, confidence: 99%)
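The error-compensation claim in the excerpt above can be checked numerically: averaging several independently noisy regressors gives a much lower mean-squared error than a typical single regressor (a synthetic sketch; the target and Gaussian noise model are made up purely for illustration):

```python
import random
import statistics

rng = random.Random(42)
truth = [float(i) for i in range(100)]          # ground-truth regression target

# Twenty base regressors, each = truth plus independent Gaussian noise.
preds = [[t + rng.gauss(0, 2.0) for t in truth] for _ in range(20)]

def mse(p):
    """Mean-squared error of a prediction vector against the truth."""
    return statistics.fmean((a - b) ** 2 for a, b in zip(p, truth))

ensemble = [statistics.fmean(col) for col in zip(*preds)]   # average the models
avg_single_mse = statistics.fmean(mse(p) for p in preds)
print(avg_single_mse, mse(ensemble))  # ensemble error is markedly lower
```

With independent noise, averaging n such models cuts the error variance by roughly a factor of n, which is the mechanism behind the robustness gain described in the excerpt.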