2004
DOI: 10.1007/978-3-540-24671-8_6
Weak Hypotheses and Boosting for Generic Object Detection and Recognition

Abstract: In this paper we describe the first stage of a new learning system for object detection and recognition. For our system we propose Boosting [5] as the underlying learning technique. This allows the use of very diverse sets of visual features in the learning process within a common framework: Boosting, together with a weak hypotheses finder, may choose very inhomogeneous features as most relevant for combination into a final hypothesis. As another advantage, the weak hypotheses finder may search the weak hypothese…
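The abstract describes Boosting combining weak hypotheses, each possibly built on a different feature type, into a final weighted hypothesis. As a rough illustration of that mechanism (not the paper's actual implementation), here is a minimal AdaBoost sketch using decision stumps as weak hypotheses; the function names and the stump learner are illustrative assumptions:

```python
import numpy as np

def train_adaboost(X, y, n_rounds=10):
    """Minimal AdaBoost sketch with decision stumps as weak hypotheses.

    X: (n_samples, n_features) array; y: labels in {-1, +1}.
    Each round picks the stump with the lowest weighted error, so stumps
    over very different (inhomogeneous) features can enter the ensemble.
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)            # example weights
    ensemble = []                      # (alpha, feature, threshold, polarity)
    for _ in range(n_rounds):
        best = None
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol, pred)
    # weight of the chosen weak hypothesis grows as its error shrinks
        err, j, thr, pol, pred = best
        err = max(err, 1e-12)          # guard against division by zero
        alpha = 0.5 * np.log((1 - err) / err)
        ensemble.append((alpha, j, thr, pol))
        w *= np.exp(-alpha * y * pred)  # upweight misclassified examples
        w /= w.sum()
    return ensemble

def predict(ensemble, X):
    """Final hypothesis: sign of the alpha-weighted vote of all stumps."""
    score = np.zeros(len(X))
    for alpha, j, thr, pol in ensemble:
        score += alpha * np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
    return np.sign(score)
```

The exhaustive stump search stands in for the paper's weak hypotheses finder; in the actual system that finder searches over heterogeneous visual feature sets rather than raw coordinates.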

Cited by 197 publications (246 citation statements)
References 24 publications (59 reference statements)
“…We compared our result to the state of the art results from [6] and [12]. Table 1 summarizes the recognition accuracy at the equal ROC points (point at which the true positive rate equals one minus the false positive rate) of our different approaches: no part selection with PCA, part selection with PCA, part selection with 2D PCA and results from other recent methods.…”
Section: Results (mentioning)
confidence: 99%
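The statement above reports accuracy at the equal ROC point, defined as the operating point where the true positive rate equals one minus the false positive rate. A small sketch of how such a point can be located from classifier scores (function name and inputs are illustrative, not from the cited papers):

```python
import numpy as np

def equal_roc_point(scores, labels):
    """Find the ROC operating point closest to TPR = 1 - FPR.

    scores: classifier scores (higher = more positive);
    labels: binary ground truth in {0, 1}.
    Returns the TPR at the threshold minimizing |TPR - (1 - FPR)|.
    """
    pos, neg = labels == 1, labels == 0
    best_gap, best_tpr = np.inf, 0.0
    for t in np.unique(scores):
        pred = scores >= t               # positive decision at threshold t
        tpr = pred[pos].mean()           # true positive rate
        fpr = pred[neg].mean()           # false positive rate
        gap = abs(tpr - (1 - fpr))
        if gap < best_gap:
            best_gap, best_tpr = gap, tpr
    return best_tpr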
“…In contrast to this, discriminant models directly learn the mapping from x to t c based on a decision function Φ(x) or estimate the posterior class probability P (t c |x) in a single step (Ng & Jordan, 2001). Common approaches for this group of categorization models are based on support vector machines (Heisele et al., 2001), boosting (Viola & Jones, 2001; Opelt et al., 2004) or SNOW (Agarwal et al., 2004). Such discriminant models tend to achieve a better categorization performance compared to generative models if a large ensemble of training examples is available (Ng & Jordan, 2001).…”
Section: Visual Category Learning Approaches (mentioning)
confidence: 99%
“…Note that the amount of supervision varies over the methods where e.g. [26] use labels and bounding boxes (as we do); [2,3,12,22] use just the object labels; and Sivic et al. [25] use no supervision. It should be pointed out that we use just 50 training images and 50 validation images for each category, which is less than the other approaches use.…”
Section: Caltech Datasets (mentioning)
confidence: 99%
“…The methods differ on the details of the codebook, but more fundamentally they differ in how strictly the geometry of the configuration of parts constituting an object class is constrained. For example, Csurka et al [10], Bar-Hillel et al [3] and Opelt et al [22] simply use a "bag of visual words" model (with no geometrical relations between the parts at all), Agarwal & Roth [1], Amores et al [2], and Vidal-Naquet and Ullman [27] use quite loose pairwise relations, whilst Fergus et al [12] have a strongly parametrized geometric model consisting of a joint Gaussian over the centroid position of all the parts. The approaches using no geometric relations are able to categorize images (as containing the object class), but generally do not provide location information (no detection).…”
Section: Introduction and Objective (mentioning)
confidence: 99%
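The last statement contrasts part-based models by how much geometry they keep, with the "bag of visual words" keeping none. A minimal sketch of that representation, assuming a precomputed codebook of visual-word centroids (e.g. from k-means over training descriptors); the function name is illustrative:

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Bag-of-visual-words: quantize each local descriptor to its nearest
    codebook entry and count occurrences. All geometric relations between
    parts are discarded; only word frequencies remain.

    descriptors: (n, d) local features from one image;
    codebook: (k, d) visual-word centroids (assumed precomputed).
    """
    # squared distance from every descriptor to every centroid
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)          # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()           # normalized word histogram
```

Because the histogram ignores where the parts occur, such a model can say an image contains the category but, as the statement notes, cannot localize the object; the pairwise-relation and joint-Gaussian models trade simplicity for that localization ability.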