1995
DOI: 10.1007/3-540-59119-2_166
A decision-theoretic generalization of on-line learning and an application to boosting

Cited by 2,194 publications (932 citation statements)
References 8 publications
“…AdaBoost is a machine learning method that combines multiple weak classifiers, each of which only returns a true or false, so that an effective strong classifier is constructed [3].…”
Section: AdaBoost Classifiers
Mentioning confidence: 99%
“…The image-based human detection generally consists of two stages; calculation of feature amount of given images and pattern classification based on machine learning. In this implementation, histograms of oriented gradients (HOG) [2] and AdaBoost classifiers [3] are used as feature amount and pattern classifiers, respectively. The HOG feature roughly describes object shape of local regions of given images and this is widely used for various object recognition such as pedestrian and car detection [4]- [6].…”
Section: Introduction
Mentioning confidence: 99%
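The HOG feature mentioned in the statement above is an orientation histogram of local image gradients. A minimal sketch of one descriptor cell follows; the 8x8 cell size, 9 unsigned orientation bins, lack of block normalization, and the test patch are illustrative assumptions, not details from the cited implementation.

```python
import numpy as np

def cell_hog(patch, n_bins=9):
    """Magnitude-weighted orientation histogram for a single cell (HOG-style)."""
    gy, gx = np.gradient(patch.astype(float))       # image gradients (rows, cols)
    mag = np.hypot(gx, gy)                          # gradient magnitude
    ang = np.degrees(np.arctan2(gy, gx)) % 180      # unsigned orientation in [0, 180)
    bins = (ang / (180 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins, mag)                      # each pixel votes with its magnitude
    return hist / (np.linalg.norm(hist) + 1e-9)     # L2-normalize the histogram

# A horizontal intensity ramp has purely horizontal gradients,
# so all the mass lands in the 0-degree bin.
patch = np.tile(np.arange(8.0), (8, 1))
print(cell_hog(patch))
```

A full HOG descriptor would tile the image into such cells and normalize groups of cells (blocks) before concatenation.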
“…The AdaBoost algorithm was first introduced in (Freund & Schapire, 1997). The purpose of this algorithm is to combine several weak classifiers into a strong classifier.…”
Section: Feature Selection and Classifier Training
Mentioning confidence: 99%
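The weak-to-strong combination these statements describe can be sketched in a few lines. The decision-stump weak learner, the toy 1-D dataset, and the round count below are illustrative assumptions, not the paper's experimental setup; only the weighting scheme (exponential re-weighting of misclassified samples, log-odds classifier weights) follows the AdaBoost algorithm of Freund & Schapire.

```python
import numpy as np

def adaboost(X, y, rounds=5):
    """Train AdaBoost with 1-D threshold stumps as weak learners. y in {-1, +1}."""
    n = len(X)
    w = np.full(n, 1.0 / n)                      # start with uniform sample weights
    stumps = []                                  # list of (threshold, polarity, alpha)
    for _ in range(rounds):
        best = None
        for thr in np.unique(X):                 # exhaustive stump search
            for pol in (+1, -1):
                pred = pol * np.where(X < thr, 1, -1)
                err = np.sum(w[pred != y])       # weighted training error
                if best is None or err < best[0]:
                    best = (err, thr, pol, pred)
        err, thr, pol, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)    # guard against log(0)
        alpha = 0.5 * np.log((1 - err) / err)    # weight of this weak classifier
        w *= np.exp(-alpha * y * pred)           # up-weight misclassified samples
        w /= w.sum()
        stumps.append((thr, pol, alpha))
    return stumps

def predict(stumps, X):
    """Strong classifier: sign of the alpha-weighted vote of all stumps."""
    score = sum(alpha * pol * np.where(X < thr, 1, -1)
                for thr, pol, alpha in stumps)
    return np.sign(score)

# Toy labels that no single stump can fit; the boosted ensemble can.
X = np.array([0., 1., 2., 3., 4., 5.])
y = np.array([1, 1, -1, -1, 1, 1])
stumps = adaboost(X, y, rounds=5)
print(predict(stumps, X))  # → [ 1.  1. -1. -1.  1.  1.]
```

Each stump alone misclassifies at least two points here, but after a few rounds the weighted vote separates the classes, which is the "weak learners into a strong classifier" behavior the quoted statements refer to.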
“…• The development of many powerful, mainly discriminative methodologies such as Boosting [31], Support Vector machines [32] and their recent structural extension [33] and (deep) neural network architectures [34] which can harness the amount of information provided by the above mentioned modern large scale datasets and facilitate training of complex deformable object and facial models [35].…”
Section: Introduction
Mentioning confidence: 99%