2011 18th IEEE International Conference on Image Processing
DOI: 10.1109/icip.2011.6115887
Direction-based stochastic matching for pedestrian recognition in non-overlapping cameras

Cited by 11 publications (9 citation statements); references 7 publications.
“…Lighting variations are addressed through color normalization [18], exemplar-based approaches [20], or brightness transfer functions learned with [23,25] or without [13,19,24,29] supervision. Discriminative power is improved by saliency information [33,34] or by learning features specific to body parts [6,18,[20][21][22][23]27], either in the image [35][36][37] or back-projected onto an articulated [38,39] or monolithic [40] 3D body model. All MTMC trackers employ optimization to maximize the coherence of observations for predicted identities.…”
Section: Related Work
confidence: 99%
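The brightness transfer functions mentioned above are commonly learned by cumulative-histogram matching between two cameras' views of the same pedestrians. A minimal sketch of that idea follows; the function name `learn_btf` and the use of synthetic random brightness samples are illustrative assumptions, not taken from the cited works.

```python
import numpy as np

def learn_btf(brightness_a, brightness_b, levels=256):
    """Sketch of a brightness transfer function (BTF) between two
    cameras via cumulative-histogram matching: each brightness level
    in camera A is mapped to the level in camera B with the same
    cumulative frequency. Inputs are 1-D arrays of pixel brightness
    values (0..levels-1) from corresponding pedestrian patches.
    (Illustrative; not the method of any single cited paper.)"""
    ha, _ = np.histogram(brightness_a, bins=levels, range=(0, levels))
    hb, _ = np.histogram(brightness_b, bins=levels, range=(0, levels))
    ca = np.cumsum(ha) / ha.sum()  # CDF of camera A
    cb = np.cumsum(hb) / hb.sum()  # CDF of camera B
    # for each level in A, find the smallest level in B whose CDF matches
    return np.searchsorted(cb, ca, side="left").clip(0, levels - 1)

# usage: map camera-A brightness values into camera-B's colour space
rng = np.random.default_rng(0)
btf = learn_btf(rng.integers(0, 256, 5000), rng.integers(0, 256, 5000))
```

The resulting lookup table is monotonically non-decreasing, so it preserves brightness ordering while compensating for inter-camera photometric differences.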
“…[6,255,260,348,358]. Many object trackers, like [37], rely on extracting appearance information from the object pixels using hand-crafted features; colour histograms [38,47,49,50,80,192,317,329,367] and texture descriptors [38,49,79,187,373,374] are the most popular representations for appearance modelling in MOT.…”
Section: B) Appearance Models
confidence: 99%
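The colour-histogram appearance models referenced above can be sketched as a normalised per-channel histogram compared with the Bhattacharyya coefficient, a common similarity measure for such descriptors. The function names and patch sizes below are illustrative assumptions, not an implementation from the cited trackers.

```python
import numpy as np

def colour_histogram(patch, bins=8):
    """Per-channel colour histogram descriptor for an RGB pedestrian
    patch (H x W x 3 uint8 array), L1-normalised so patches of
    different sizes remain comparable. (Illustrative sketch.)"""
    hist = np.concatenate([
        np.histogram(patch[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    return hist / hist.sum()

def bhattacharyya(h1, h2):
    """Bhattacharyya coefficient between two normalised histograms;
    1.0 means identical appearance distributions."""
    return float(np.sum(np.sqrt(h1 * h2)))

# usage: compare two detections of (possibly) the same pedestrian
rng = np.random.default_rng(0)
a = colour_histogram(rng.integers(0, 256, (64, 32, 3), dtype=np.uint8))
b = colour_histogram(rng.integers(0, 256, (64, 32, 3), dtype=np.uint8))
score = bhattacharyya(a, b)
```

In a tracker, this similarity score would feed the data-association step, linking detections across frames or cameras when appearance agreement is high.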
“…50 Comparison of the multi-object tracking performance of MOT.2 with the top 3 online benchmark algorithms on the MOT17 dataset.…”
confidence: 99%
“…[3] proposed an optimization framework combining short-term feature correspondences across cameras with long-term feature dependency models. [19] did not use spatio-temporal cues in multi-camera scenarios, but instead investigated directional angles using spatio-temporal continuity within a single camera's field of view. However, most of these works fail to handle highly cluttered scenes.…”
Section: Related Work
confidence: 99%