2009 International Conference on Artificial Intelligence and Computational Intelligence
DOI: 10.1109/aici.2009.235

Research on Ensemble Learning

Cited by 103 publications (59 citation statements)
References 5 publications

“…Compared to the individual LHF products, the R² of the ML methods was 3.7–46.4% higher and the bias decreased by approximately 15 W m⁻². Our results also show that some minor differences existed among the four ML methods, mainly caused by the structure of the different fusion algorithms [69, 70]. Sagi and Rokach [71] showed that differences in the structure of ensemble methods may significantly affect the predictions, and that selecting the best ensemble method for a given problem requires considering other factors (such as suitability to a given setting).…”
Section: Discussion (mentioning)
confidence: 75%
“…Here we use a learner to combine the outputs of different learners, which reduces either the bias error or the variance error, depending on the combining learner we use. Compared with other commonly used ensemble learning techniques, such as bagging and boosting, stacking can transfer the ensemble's features to a simple model and does not require much parameter tuning or feature selection [13, 14]. To improve prediction precision while avoiding overfitting, a two-layer stacking method, comprising a hidden layer and an output layer, is applied to build the ensemble model, as illustrated in Figure 7.…”
Section: Ensemble Model with Two-Layer Stacking (mentioning)
confidence: 99%
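
The two-layer stacking this excerpt describes maps naturally onto scikit-learn's StackingClassifier: the "hidden layer" is a set of base learners whose out-of-fold predictions feed the "output layer", a simple combining learner. The sketch below is a minimal illustration under assumed choices (random-forest and gradient-boosting base learners, a logistic-regression meta-learner, synthetic data), not the configuration used in the citing paper.

```python
# Minimal sketch of two-layer stacking: a "hidden layer" of base
# learners and an "output layer" meta-learner. All model choices and
# the data are assumptions for illustration only.
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Hidden layer: heterogeneous base learners.
base_learners = [
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("gb", GradientBoostingClassifier(random_state=0)),
]

# Output layer: a simple combining learner; cv=5 makes the meta-learner
# train on out-of-fold predictions, which guards against overfitting.
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(),
    cv=5,
)
stack.fit(X_train, y_train)
print("held-out accuracy:", stack.score(X_test, y_test))
```

Keeping the final estimator simple makes the output layer easy to tune, which is the property the excerpt highlights relative to bagging and boosting.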
“…In our research, the pharyngeal fricative speech detection task is performed by applying a bagging ensemble learning classifier. The idea of ensemble learning rests principally on the theoretical cornerstone that the generalization ability of an ensemble is usually much stronger than that of a single learner [32–34]. There are 30 learners in the training process of this classification model.…”
Section: Bagging of Ensemble Learning Classifier (mentioning)
confidence: 99%
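
For a concrete picture of the 30-learner bagging setup this excerpt describes, the sketch below trains 30 base classifiers on independent bootstrap samples and aggregates their votes. The decision-tree base learner and the synthetic data are assumptions; the citing paper's pharyngeal fricative speech features are not reproduced here.

```python
# Minimal sketch of bagging with 30 base learners, mirroring the count
# in the excerpt. Base learner choice and data are assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=30, random_state=0)

# Each tree is fit on its own bootstrap sample, so the base classifiers
# are trained near-independently; majority voting then yields stronger
# generalization than any single learner.
bagging = BaggingClassifier(
    DecisionTreeClassifier(),  # unstable learners benefit most from bagging
    n_estimators=30,
    bootstrap=True,
    random_state=0,
)
print("5-fold CV accuracy:", cross_val_score(bagging, X, y, cv=5).mean())
```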
“…The classification performance of the model depends on the stability of the base classifiers [32]. All the base classifiers are mutually independent.…” (He et al., BioMed Eng OnLine, 2020, 19:36)
(mentioning)
confidence: 99%