2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros40897.2019.8967852

Geometric and Physical Constraints for Drone-Based Head Plane Crowd Density Estimation

Abstract: State-of-the-art methods for counting people in crowded scenes rely on deep networks to estimate crowd density in the image plane. While useful for this purpose, this image-plane density has no immediate physical meaning because it is subject to perspective distortion. This is a concern in sequences acquired by drones because the viewpoint changes often. This distortion is usually handled implicitly, by either learning scale-invariant features or estimating density in patches of different sizes, neither of which …
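The head-plane idea in the abstract can be sketched concretely: if a homography H maps image coordinates to the (physical) head plane, then a density map transforms with the local Jacobian determinant of H, so that integrating either map yields the same person count. A minimal sketch, assuming a known 3x3 homography H; this is an illustration of the general change-of-variables idea, not the paper's exact formulation:

```python
import numpy as np

def homography_jacobian_det(H, x, y):
    """Absolute Jacobian determinant of the homography H at image point (x, y).

    For (u, v) = ((h00*x + h01*y + h02)/w, (h10*x + h11*y + h12)/w)
    with w = h20*x + h21*y + h22, this is the local area-scaling factor
    from image coordinates to head-plane coordinates.
    """
    w = H[2, 0] * x + H[2, 1] * y + H[2, 2]
    u = H[0, 0] * x + H[0, 1] * y + H[0, 2]
    v = H[1, 0] * x + H[1, 1] * y + H[1, 2]
    du_dx = (H[0, 0] * w - u * H[2, 0]) / w**2
    du_dy = (H[0, 1] * w - u * H[2, 1]) / w**2
    dv_dx = (H[1, 0] * w - v * H[2, 0]) / w**2
    dv_dy = (H[1, 1] * w - v * H[2, 1]) / w**2
    return abs(du_dx * dv_dy - du_dy * dv_dx)
```

Dividing the image-plane density at (x, y) by this determinant gives a physically meaningful head-plane density at the mapped point, with the total count preserved under the change of variables.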

Cited by 57 publications (57 citation statements). References 24 publications.
“…We have presented a complete pipeline for removing perspective distortion from an image and obtaining the bird's eye view from a monocular image automatically. Our method can be used as a plug-and-play module to help other networks that suffer from multiple scales due to perspective distortion, such as vehicle tracking [28], crowd counting [24,25], or penguin counting [4]. Our method is fast, robust, and can be used in real time on videos to generate a bird's eye view of the scene.…”
Section: Results
confidence: 99%
“…It can be used as a preprocessing step for many other computer vision tasks, such as object detection [19,29] and tracking [10], and has applications in video surveillance and traffic control. For example, in crowd counting, where perspective distortion affects the crowd density in the image, the crowd density can instead be predicted in the world [24]. *The author is now at Latent Logic, Oxford. Figure 1: An overview of our method for obtaining the bird's eye view of a scene from a single perspective image.…”
Section: Introduction
confidence: 99%
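The bird's-eye-view preprocessing step described above amounts to warping each output pixel back through the inverse homography and sampling the source image. A minimal nearest-neighbour sketch (the function name and shapes are illustrative; a real pipeline would use a library routine such as OpenCV's warpPerspective):

```python
import numpy as np

def warp_to_birds_eye(img, H, out_shape):
    """Inverse warp: for each bird's-eye-view output pixel (u, v), map it
    back to the source image via H^-1 and copy the nearest source pixel.

    img: 2-D array (grayscale image), H: 3x3 homography from source to
    bird's eye view, out_shape: (rows, cols) of the output.
    """
    H_inv = np.linalg.inv(H)
    out = np.zeros(out_shape, dtype=img.dtype)
    for v in range(out_shape[0]):
        for u in range(out_shape[1]):
            x, y, w = H_inv @ np.array([u, v, 1.0])
            xi, yi = int(round(x / w)), int(round(y / w))
            # Leave pixels that fall outside the source image as zero.
            if 0 <= yi < img.shape[0] and 0 <= xi < img.shape[1]:
                out[v, u] = img[yi, xi]
    return out
```

With the identity homography this reduces to a copy of the input, which makes the inverse-mapping convention easy to check.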
“…Crowd Comparison Application: We used an implementation [31] of the multi-column convolutional neural network (MCNN) architecture trained on part A of the ShanghaiTech dataset [40]. The subtasks consisted of 4 sets of crowd images that looked distinct and were sourced from various datasets: (1) UCF-QNRF, consisting of images of extremely large crowds scraped from the internet [14]; (2) Venice, consisting of images from cameras around a plaza in Venice, Italy [22,23]; (3) Shanghai A-test, a subset of the ShanghaiTech dataset consisting of images of crowds scraped from the internet (which differs significantly from UCF-QNRF in crowd density); and (4) Shanghai B, a subset of the ShanghaiTech dataset consisting of images from cameras around Shanghai streets. Although MCNN outputs a predicted count of people in an image, our subtasks are binary classification problems asking which of two images has more people.…”
Section: Models
confidence: 99%
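The binary "which image has more people?" subtask reduces to comparing the integrals of the two predicted density maps, since a density-based counter's estimate is the sum over the map. A minimal sketch (the function name and the density-map inputs are illustrative, not part of the cited implementation):

```python
import numpy as np

def which_crowd_is_larger(density_a, density_b):
    """Decide which of two images has more people by comparing the
    integrals (sums) of their predicted crowd-density maps."""
    count_a = float(density_a.sum())
    count_b = float(density_b.sum())
    return "A" if count_a > count_b else "B"
```

This also shows why the comparison can be more robust than the raw counts: a consistent multiplicative bias in the counter cancels out when only the ordering of the two sums matters.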
“…The negative-sampling and data-driven approaches are missing in all the listed methods. Liu et al. [82] proposed a geometry-aware crowd-density-estimation technique. An explicit model was proposed to deal with perspective-distortion effects.…”
Section: Scale-CNN-CC Techniques
confidence: 99%