2019
DOI: 10.1109/access.2019.2947359

Feature Learning Viewpoint of Adaboost and a New Algorithm

Abstract: The AdaBoost algorithm has the notable property of resisting overfitting. Understanding this phenomenon is a fascinating fundamental theoretical problem, and many studies have sought to explain it from the statistical view and from margin theory. In this paper, we illustrate it from the feature-learning viewpoint and propose the AdaBoost+SVM algorithm, which explains AdaBoost's resistance to overfitting directly and in an easily understood way. First, we adopt the AdaBoost algorithm to learn the base clas…
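The abstract is truncated, but the named AdaBoost+SVM combination suggests a two-stage pipeline: AdaBoost's base classifiers act as learned features, and an SVM is trained on their outputs. The sketch below illustrates that reading with scikit-learn; the synthetic data, the use of decision stumps, and the linear kernel are assumptions for illustration, not the authors' exact algorithm.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: AdaBoost (default base learner: depth-1 decision stumps)
# learns the base classifiers on the training set.
ada = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)

def base_features(model, X):
    # Re-represent each sample by the +1/-1 outputs of the learned stumps.
    return np.array([est.predict(X) * 2 - 1 for est in model.estimators_]).T

# Stage 2: an SVM is trained in this learned feature space.
svm = SVC(kernel="linear").fit(base_features(ada, X_tr), y_tr)
acc = svm.score(base_features(ada, X_te), y_te)
print(round(acc, 2))
```

On this synthetic task the SVM operates on a 50-dimensional binary representation produced by the stumps, which is the sense in which AdaBoost can be viewed as a feature learner.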


Cited by 56 publications (42 citation statements)
References 27 publications (34 reference statements)
“…In this paper, we will choose the two most important features according to the importance of the features to build a two-dimensional feature space. By visualizing decision boundaries, we can intuitively see the distribution of different patterns in the feature space and the classification effect of different classifiers such as SVM [29], GBDT [30], RF [31] and Adaboost [32].…”
Section: The Classifier Performance
confidence: 99%
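The excerpt above describes picking the two most important features and comparing classifier decision boundaries in the resulting 2-D space. A minimal sketch of that setup follows; the use of random-forest feature importances for the ranking and the synthetic data are assumptions, and plotting is omitted — the grid of predictions is what a decision-boundary plot would color.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, n_informative=3,
                           random_state=0)

# Rank features by importance and keep the top two.
rf = RandomForestClassifier(random_state=0).fit(X, y)
top2 = np.argsort(rf.feature_importances_)[-2:]
X2 = X[:, top2]

# A prediction grid over the 2-D feature space stands in for the plot.
xx, yy = np.meshgrid(
    np.linspace(X2[:, 0].min(), X2[:, 0].max(), 50),
    np.linspace(X2[:, 1].min(), X2[:, 1].max(), 50),
)
grid = np.c_[xx.ravel(), yy.ravel()]

models = {
    "SVM": SVC(),
    "GBDT": GradientBoostingClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}
boundaries = {name: m.fit(X2, y).predict(grid).reshape(xx.shape)
              for name, m in models.items()}
print(sorted(boundaries))
```

Each entry of `boundaries` is a 50x50 array of class labels; coloring it (e.g. with `matplotlib.pyplot.contourf`) gives the decision-boundary comparison the excerpt refers to.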
“…Due to the relatively small size of the considered Arabic/English speech emotion database (320 utterances), several considerations were made to assure generalization of the created classification models. AdaBoost [65] was implemented alongside the linear SVM classifier in order to avoid overfitting [66]. Moreover, leave-one-out cross validation was used where the number of folds is the same as the number of database instances such that each instance in the database is classified only once using all the other instances for training.…”
Section: Speech Emotion Recognition, 1) Methods
confidence: 99%
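The evaluation protocol described above — leave-one-out cross-validation so that each instance is classified exactly once with all other instances used for training — can be sketched as below. The synthetic data stands in for the speech-emotion features; the classifier choices mirror the excerpt (a linear SVM and AdaBoost).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

# Small dataset: the number of folds equals the number of instances.
X, y = make_classification(n_samples=60, n_features=8, random_state=0)

results = {}
for name, clf in [("linear SVM", SVC(kernel="linear")),
                  ("AdaBoost", AdaBoostClassifier(random_state=0))]:
    # One fold per instance: 60 train/test splits in total.
    scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
    results[name] = scores.mean()
    print(name, len(scores), round(results[name], 2))
```

Leave-one-out is attractive for small databases because it maximizes training data per fold, at the cost of fitting the model once per instance.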
“…In essence, ensemble learning methods are meta-algorithms incorporating many methods of machine learning into one predictive model to improve performance. We selected three ensemble methods based on literature performance on assisting with pandemic predictions [10,11,12]. These are AdaBoost, Bagging and Extra-Trees classifiers.…”
Section: Ensemble Methods
confidence: 99%
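The three ensemble meta-algorithms the excerpt names — AdaBoost, Bagging, and Extra-Trees — each combine many weak learners into one predictive model, but differ in how: AdaBoost reweights hard samples sequentially, Bagging averages trees over bootstrap resamples, and Extra-Trees adds randomized split thresholds. A minimal sketch on synthetic data (an assumption, purely to illustrate the API):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              ExtraTreesClassifier)
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=12, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

ensembles = {
    "AdaBoost": AdaBoostClassifier(random_state=1),       # sequential reweighting
    "Bagging": BaggingClassifier(random_state=1),         # bootstrap averaging
    "Extra-Trees": ExtraTreesClassifier(random_state=1),  # extra-randomized splits
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
          for name, m in ensembles.items()}
print({k: round(v, 2) for k, v in scores.items()})
```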