2020
DOI: 10.17815/cd.2020.30
Accurate pedestrian localization in overhead depth images via Height-Augmented HOG

Abstract: We tackle the challenge of reliably and automatically localizing pedestrians in real-life conditions through overhead depth imaging at unprecedented high-density conditions. Leveraging a combination of Histogram of Oriented Gradients-like feature descriptors, neural networks, data augmentation and custom data annotation strategies, this work contributes a robust and scalable machine learning-based localization algorithm, which delivers near-human localization performance in real-time, even with local pede…
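The paper's Height-Augmented HOG descriptor itself is not reproduced on this page, but the idea of combining an orientation-gradient histogram with a height cue can be sketched over an overhead depth patch. This is a minimal illustration only: the cell size, bin count, normalization, and the choice of augmenting with the patch's maximum height are assumptions, not the published method.

```python
import numpy as np

def height_augmented_hog(depth_patch, n_bins=9, cell=8):
    """HOG-like descriptor over an overhead depth patch, augmented with a
    height statistic. Illustrative sketch; parameters are assumptions."""
    gy, gx = np.gradient(depth_patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation in [0, pi)
    h, w = depth_patch.shape
    feats = []
    # Accumulate a magnitude-weighted orientation histogram per cell.
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            a = ang[i:i + cell, j:j + cell].ravel()
            m = mag[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=n_bins, range=(0.0, np.pi), weights=m)
            feats.append(hist)
    v = np.concatenate(feats)
    v = v / (np.linalg.norm(v) + 1e-9)       # simple global normalization
    # "Height augmentation": append the maximum height value of the patch.
    return np.append(v, depth_patch.max())
```

For a 16×16 patch with 8-pixel cells and 9 bins this yields a 4·9 + 1 = 37-dimensional vector; a classifier (e.g. a small neural network, as the abstract suggests) would then score such descriptors at candidate head positions.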

Cited by 13 publications (15 citation statements). References 12 publications.
“…Large volumes of experimental data, on the order of hundreds of thousands of real-life trajectories, are indeed essential to analyze quantitatively and systematically the physics of pedestrian motion, disentangling the high variation in individual behaviors from average patterns and characterizing typical fluctuations and universal features [19,20]. This relative delay in performing high-statistics analyses of pedestrian motion (especially in comparison with other "active matter" physical systems [21]) is most likely due to the complex technical challenge of achieving accurate, privacy-preserving individual tracking in real-life conditions (see, e.g., [20,22,23], or [24] for approaches targeting even higher resolution). Market solutions, such as the one considered in this paper, are also becoming accessible, offering various trade-offs between accuracy and cost (see, e.g., [25]).…”
Section: Related Work: (Social) Distance in Pedestrian Dynamics
Confidence: 99%
“…We acquired individual pedestrian trajectories at 30 Hz time resolution by means of overhead depth images and the HA-HOG localization method [21]. We collected raw depth images of a walkable area of about 30 m² via 8 Orbbec Persee sensors attached underneath a pedestrian overpass and arranged in a 4×2 grid (see dashed gray line in Fig.…”
Section: Numerical Results
Confidence: 99%
“…In the localization stage, each frame is processed independently to single out each pedestrian and estimate their position. To this purpose, image processing and/or machine learning models are used [13,16,17,26]. Lagrangian time-tracking assigns an id to each detection on the basis of continuity arguments.…”
Section: Essentials of 3D Optic-based Pedestrian Tracking
Confidence: 99%
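The continuity argument mentioned above can be sketched as a greedy nearest-neighbor linker: each current-frame detection inherits the id of the closest previous-frame position, provided the jump is plausible. This is a minimal illustration, not the cited trackers; the `max_jump` threshold and the greedy matching strategy are assumptions.

```python
import math

def link_ids(prev, curr, max_jump=0.5):
    """Assign ids to current-frame detections by nearest-neighbor continuity.
    `prev` maps id -> (x, y); `curr` is a list of (x, y) detections.
    Greedy sketch: closest pairs are matched first; unmatched detections
    (pedestrians entering the scene) receive fresh ids."""
    assigned = {}          # detection index -> id
    used_ids = set()
    pairs = sorted(
        (math.dist(p, c), pid, k)
        for pid, p in prev.items()
        for k, c in enumerate(curr)
    )
    next_id = max(prev, default=-1) + 1
    for d, pid, k in pairs:
        if d > max_jump:   # remaining pairs are even farther apart
            break
        if pid in used_ids or k in assigned:
            continue
        assigned[k] = pid
        used_ids.add(pid)
    for k in range(len(curr)):
        if k not in assigned:
            assigned[k] = next_id
            next_id += 1
    return {assigned[k]: curr[k] for k in range(len(curr))}
```

At 30 Hz frame rates, inter-frame displacements are a few centimeters, so even this simple continuity rule disambiguates most crossings; production trackers additionally handle occlusions and temporary detection gaps.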
“…Localization occurs via depth clustering (as in [12]), and time-tracking uses the Trackpy Python library [28]. The same approach has been successfully used, e.g., in stations, streets, and museums [9,12,13,26]. The specific setup considered consists of a grid of 3×4 Microsoft Kinect™ [25] depth sensors.…”
Section: Tracking Technologies and Experimental Setups
Confidence: 99%
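The depth-clustering localization mentioned in this statement can be illustrated with SciPy's connected-component labeling: in an overhead depth image, pixels sufficiently above the floor are grouped into components, one centroid per pedestrian. The thresholds and names below are assumptions for the sketch, not the cited pipeline; in a full pipeline the per-frame centroids would then be linked over time, e.g. with `trackpy.link`.

```python
import numpy as np
from scipy import ndimage

def localize_heads(depth, floor_dist=2.6, min_height=1.2, min_px=20):
    """Depth-clustering localization sketch. `depth` is an overhead depth
    image in meters; `floor_dist` is the sensor-to-floor distance.
    Thresholds are illustrative assumptions."""
    height = floor_dist - depth            # height above the floor per pixel
    mask = height > min_height             # keep head/shoulder pixels
    labels, n = ndimage.label(mask)        # connected components = candidates
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_px]  # drop noise blobs
    return ndimage.center_of_mass(mask, labels, keep)  # one (row, col) each
```

The minimum-size filter (`min_px`) rejects isolated noisy pixels; at higher densities, where neighboring pedestrians merge into one component, clustering must be refined, which is precisely the regime the HA-HOG paper targets.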