2007 IEEE 11th International Conference on Computer Vision
DOI: 10.1109/iccv.2007.4409043

Dynamic Cascades for Face Detection

Abstract: In this paper, we propose a novel method, called "Dynamic Cascade", for training an efficient face detector on massive data sets. There are three key contributions. The first is a new cascade algorithm called "Dynamic Cascade", which can train cascade classifiers on massive data sets and only requires a small number of training parameters. The second is the introduction of a new kind of weak classifier, called "Bayesian Stump", for training boost classifiers. It produces more stable boost classifiers with fewe…

Cited by 77 publications (64 citation statements). References 14 publications.
“…Since a reasonable goal may not be known a priori, the algorithm adjusts its cost function depending on the attainability of the goal, based on cost prediction. In [68], a dynamic cascade was proposed, which assumes that the false negative rate of the nodes changes exponentially in each stage, following the idea in [106]. The approach is simple and ad hoc, though it appears to work reasonably well.…”
Section: Variations of the Boosting Learning Algorithm
confidence: 99%
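The exponential false-negative schedule described in this excerpt is easy to make concrete. The sketch below is illustrative only: the function and parameter names are hypothetical, and the exact exponential form used in [68] may differ. It allocates a total false-negative budget across cascade stages as a geometric series:

```python
def fn_rate_schedule(total_fn_rate, n_stages, decay=0.5):
    """Allocate a total false-negative budget across cascade stages as a
    geometric series: stage i may reject at most scale * decay**i of the
    positives, and the per-stage budgets sum to total_fn_rate.

    Illustrative sketch only -- the exact form in [68] may differ.
    """
    # Solve scale * (1 + decay + ... + decay**(n_stages - 1)) = total_fn_rate.
    scale = total_fn_rate * (1 - decay) / (1 - decay ** n_stages)
    return [scale * decay ** i for i in range(n_stages)]

# Example: a 20-stage cascade allowed to lose 5% of positives in total.
budgets = fn_rate_schedule(0.05, 20)
assert abs(sum(budgets) - 0.05) < 1e-12
```

Each stage's rejection threshold would then be set on validation data so that the stage discards at most its budgeted fraction of the positives surviving the earlier stages.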
“…To deal with that, it was suggested to use fine granularity in the first few layers of the cascade and coarse granularity in later layers. Another interesting recent method was proposed in [68], where a new weak classifier called Bayesian stump was introduced. The Bayesian stump is also a histogram-based weak classifier; however, its split thresholds are derived from iterative split and merge operations instead of being fixed at equal distances.…”
Section: Feature Extraction
confidence: 99%
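The split-and-merge idea can be illustrated with the merge half alone. In the sketch below, all names are my own and the merge criterion is an assumption (the paper also includes a split step and may score merges differently); it starts from fine equal-width class histograms and greedily merges the adjacent bin pair whose fusion increases the Bayes error the least:

```python
def bayesian_stump_bins(pos_hist, neg_hist, n_bins):
    """Greedy merge step for a histogram-based stump (sketch only):
    repeatedly merge the adjacent pair of bins whose fusion increases
    the Bayes error the least, until n_bins bins remain.

    pos_hist / neg_hist: per-bin (weighted) counts of positive and
    negative feature responses over an initially fine, equal-width grid.
    """
    pos, neg = list(map(float, pos_hist)), list(map(float, neg_hist))

    def bayes_error(p, n):
        # Each bin votes for its majority class; the error is the
        # minority mass left over in every bin.
        return sum(min(pi, ni) for pi, ni in zip(p, n))

    while len(pos) > n_bins:
        best_i = min(
            range(len(pos) - 1),
            key=lambda i: bayes_error(
                pos[:i] + [pos[i] + pos[i + 1]] + pos[i + 2:],
                neg[:i] + [neg[i] + neg[i + 1]] + neg[i + 2:],
            ),
        )
        pos[best_i:best_i + 2] = [pos[best_i] + pos[best_i + 1]]
        neg[best_i:best_i + 2] = [neg[best_i] + neg[best_i + 1]]
    return pos, neg
```

The resulting bin boundaries fall where the class distributions actually change rather than at fixed equal distances, which is exactly the property the excerpt highlights.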
“…Following the method in [12], we can easily build a k-bins Bayesian Stump. Moreover, we can extend it to a Look Up Table (LUT) weak classifier for RealBoost algorithms by using log-likelihood output to replace the binary output in every interval.…”
Section: Bayesian Stump Look Up Table Weak Classifier
confidence: 99%
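A minimal sketch of that extension, assuming the standard RealBoost log-likelihood-ratio output per bin; the helper below and its smoothing constant are my own illustration, not the cited paper's exact formulation:

```python
import numpy as np

def lut_weak_classifier(values, labels, weights, edges, eps=1e-9):
    """Real-valued LUT weak classifier: instead of a stump's binary
    output, each bin j emits h_j = 0.5 * ln(W+_j / W-_j), where W+/W-
    are the boosting-weight masses of positives/negatives in the bin."""
    values, labels, weights = map(np.asarray, (values, labels, weights))
    n_bins = len(edges) - 1
    bins = np.clip(np.digitize(values, edges) - 1, 0, n_bins - 1)
    w_pos = np.bincount(bins[labels == 1],
                        weights=weights[labels == 1], minlength=n_bins)
    w_neg = np.bincount(bins[labels == -1],
                        weights=weights[labels == -1], minlength=n_bins)
    lut = 0.5 * np.log((w_pos + eps) / (w_neg + eps))  # log-likelihood output
    return lambda x: lut[np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)]
```

The `eps` smoothing keeps the logarithm finite when a bin contains only one class, a standard guard in real-valued boosting implementations.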
“…Xiao et al. [12] proposed a method called Bayesian Stump to find $P(\omega_c, x)$, $c \in \{1, 2\}$, by using a histogram to estimate the probability distribution. We divide all features' output values $\{\mu(\theta_i)\}$ into $k$ sections $\delta_k = (r_{k-1}, r_k]$, and the histogram of $P(\omega_c, x)$ is…”
Section: Bayesian Stump Look Up Table Weak Classifier
confidence: 99%
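The quoted excerpt truncates before the equation itself. A standard empirical-histogram estimate consistent with the description, written in the excerpt's notation (the paper's exact equation may differ), would be

$$\hat{P}(\omega_c, \delta_k) \;=\; \frac{\#\{\, i : y_i = \omega_c,\ \mu(\theta_i) \in \delta_k \,\}}{N},$$

where $N$ is the total number of training samples and $y_i$ is the class label of sample $i$; each section $\delta_k$ contributes one entry of the per-class histogram.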