2009 IEEE 12th International Conference on Computer Vision
DOI: 10.1109/iccv.2009.5459205
Human detection using partial least squares analysis

Abstract: Significant research has been devoted to detecting people in images and videos. In this paper we describe a human detection …

Cited by 411 publications (241 citation statements)
References 21 publications
“…To do so, we first train a person detector based on a PLS appearance model that works with two classes (positive and negative person samples) [19]. The training is achieved by cropping the samples into overlapping blocks and extracting low-level features from each of them.…”
Section: Person Re-identification Based on Partial Least Squares (mentioning)
confidence: 99%
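The two-class PLS pipeline quoted above (overlapping blocks, low-level features per block, projection to a latent space, then a simple classifier) can be sketched as follows. This is a minimal illustration, not the exact pipeline of Schwartz et al. [19]: the mean/std block features, window size, number of PLS components, and the QDA classifier in the latent space are all illustrative assumptions.

```python
# Minimal sketch of a two-class PLS appearance model for person detection.
# Feature extraction, window sizes, and the final classifier are assumptions,
# not the exact pipeline of Schwartz et al. [19].
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

def block_features(window, block=16, stride=8):
    """Crop a detection window into overlapping blocks and extract simple
    low-level features (mean/std per block) as a stand-in for the richer
    edge/texture/color descriptors used in the paper."""
    h, w = window.shape
    feats = []
    for y in range(0, h - block + 1, stride):
        for x in range(0, w - block + 1, stride):
            patch = window[y:y + block, x:x + block]
            feats.extend([patch.mean(), patch.std()])
    return np.asarray(feats)

# Hypothetical training data: positive (person) and negative windows.
rng = np.random.default_rng(0)
pos = rng.random((200, 128, 64))   # 200 person windows, 128x64 pixels
neg = rng.random((200, 128, 64))   # 200 background windows

X = np.stack([block_features(w) for w in np.concatenate([pos, neg])])
y = np.r_[np.ones(len(pos)), -np.ones(len(neg))]   # +1 person, -1 background

# PLS projects the high-dimensional descriptor onto a few latent directions
# that are maximally correlated with the class label.
pls = PLSRegression(n_components=10).fit(X, y)
Z = pls.transform(X)

# A simple classifier operating in the low-dimensional latent space.
clf = QuadraticDiscriminantAnalysis().fit(Z, y)
print("training accuracy:", clf.score(Z, y))
```

At detection time, each candidate window would be described the same way, projected with `pls.transform`, and scored by the classifier.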
“…A nonmaximum suppression is applied in the results to clean up redundant detection windows in multiple scales. A more detailed description can be found in [19]. The next step is to group detection windows from sequential frames of the same camera into tracklets.…”
Section: Person Re-identification Based on Partial Least Squares (mentioning)
confidence: 99%
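The non-maximum suppression step described in this excerpt can be sketched as a standard greedy procedure over the pooled multi-scale detection windows. The box format, scores, and overlap threshold below are assumptions for illustration, not parameters taken from [19].

```python
# Greedy non-maximum suppression over detection windows from multiple scales.
import numpy as np

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences.
    Returns indices of the windows kept after suppression."""
    order = np.argsort(scores)[::-1]          # highest-scoring window first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Overlap of the kept window with the remaining ones.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter)
        # Drop windows that overlap the kept one too much (redundant detections).
        order = order[1:][iou < iou_thresh]
    return keep

# Example: two overlapping detections of the same person plus a distinct one.
boxes = np.array([[10, 10, 60, 110], [12, 8, 64, 115], [200, 50, 250, 150]], float)
scores = np.array([0.9, 0.8, 0.7])
print(non_max_suppression(boxes, scores))   # -> [0, 2]
```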
“…SVMs have been used with other descriptors for whole bodies [16] or body parts [19]. Schwartz et al [25] further incorporated texture information.…”
Section: Previous Work (mentioning)
confidence: 99%
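For context, the SVM-on-descriptor approach mentioned here typically trains a linear SVM on a holistic window descriptor. A minimal sketch, assuming HOG features from scikit-image and synthetic windows; both are illustrative stand-ins, not the cited methods.

```python
# Linear SVM on a holistic HOG descriptor per detection window (illustrative).
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
windows = rng.random((100, 128, 64))      # hypothetical 128x64 windows
labels = rng.integers(0, 2, size=100)     # 1 = person, 0 = background

# One HOG descriptor per whole window; body-part variants would instead
# train one such classifier per part and combine the part scores.
X = np.stack([hog(w, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
              for w in windows])
clf = LinearSVC().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```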
“…or/and modalities (intensity, depth, motion, etc.) into a single pattern classification module [8], [33], [37], [38], [43], [46], [48]. One fusion approach involves integration of all cues into a single joint feature space [38], [43], [46].…”
Section: Previous Work (mentioning)
confidence: 99%
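The "single joint feature space" fusion mentioned in this excerpt amounts to concatenating per-cue descriptors before classification (early fusion). Below is a minimal sketch with placeholder edge and texture features, hypothetical data, and a linear SVM; none of these choices come from the cited works.

```python
# Early fusion: descriptors from different cues are concatenated into one
# joint feature vector and fed to a single classifier (illustrative only).
import numpy as np
from sklearn.svm import LinearSVC

def edge_features(window):     # placeholder for an edge/gradient descriptor
    gy, gx = np.gradient(window)
    return np.histogram(np.hypot(gx, gy), bins=16)[0].astype(float)

def texture_features(window):  # placeholder for a texture descriptor
    return np.histogram(window, bins=16)[0].astype(float)

def joint_descriptor(window):
    # All cues integrated into a single joint feature space.
    return np.concatenate([edge_features(window), texture_features(window)])

rng = np.random.default_rng(2)
windows = rng.random((100, 128, 64))      # hypothetical detection windows
labels = rng.integers(0, 2, size=100)

X = np.stack([joint_descriptor(w) for w in windows])
clf = LinearSVC().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```

The alternative fusion strategies mentioned (per-cue classifiers whose scores are combined later) would instead train one classifier per feature function and merge their outputs.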