Abstract—The ability to act in a socially-aware way is a key skill for robots that share a space with humans. In this paper we address the problem of socially-aware navigation among people that meets objective criteria, such as travel time or path length, as well as subjective criteria, such as social comfort. As opposed to the model-based approaches typically taken in related work, we pose the problem as an unsupervised learning problem. We learn a set of dynamic motion prototypes from observations of the relative motion behavior of humans found in publicly available surveillance data sets. The learned motion prototypes are then used to compute dynamic cost maps for path planning with an any-angle A* algorithm. In the evaluation we demonstrate that the learned behaviors reproduce human relative motion better than a Proxemics-based baseline method with respect to both criteria.
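The planning step named above can be sketched in code. The following is a minimal, illustrative Theta*-style any-angle A* planner on a grid; for brevity it treats the cost map as binary (free vs. blocked, encoded as `inf`) rather than integrating learned dynamic costs, and all names are our own assumptions, not the paper's implementation.

```python
import heapq
import math

def line_of_sight(grid, a, b):
    # Bresenham-style walk: the straight segment a -> b must cross only free cells.
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx, sy = (1 if x1 > x0 else -1), (1 if y1 > y0 else -1)
    err = dx - dy
    while (x0, y0) != (x1, y1):
        if grid[y0][x0] == float('inf'):
            return False
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy
    return grid[y1][x1] != float('inf')

def theta_star(grid, start, goal):
    """Any-angle shortest path on a grid; blocked cells hold float('inf')."""
    h = lambda p: math.hypot(goal[0] - p[0], goal[1] - p[1])
    g = {start: 0.0}
    parent = {start: start}
    open_heap = [(h(start), start)]
    closed = set()
    while open_heap:
        _, s = heapq.heappop(open_heap)
        if s == goal:                      # reconstruct by following parents
            path = [s]
            while path[-1] != start:
                path.append(parent[path[-1]])
            return path[::-1]
        if s in closed:
            continue
        closed.add(s)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == dy == 0:
                    continue
                n = (s[0] + dx, s[1] + dy)
                if not (0 <= n[0] < len(grid[0]) and 0 <= n[1] < len(grid)):
                    continue
                if grid[n[1]][n[0]] == float('inf'):
                    continue
                p = parent[s]
                if line_of_sight(grid, p, n):
                    # Any-angle shortcut: connect n straight to s's parent.
                    new_g = g[p] + math.hypot(n[0] - p[0], n[1] - p[1])
                    anchor = p
                else:
                    new_g = g[s] + math.hypot(dx, dy)
                    anchor = s
                if new_g < g.get(n, float('inf')):
                    g[n] = new_g
                    parent[n] = anchor
                    heapq.heappush(open_heap, (new_g + h(n), n))
    return None
```

Unlike plain grid A*, the line-of-sight shortcut lets path segments run at arbitrary angles instead of being restricted to multiples of 45 degrees.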
Abstract—We present an approach to laser-based people tracking using a multi-hypothesis tracker that detects and tracks legs separately with Kalman filters, constant-velocity motion models, and a multi-hypothesis data association strategy. People are defined as high-level tracks consisting of two legs that are found with little model knowledge. We extend the data association so that it explicitly handles track occlusions in addition to detections and deletions. Additionally, we adapt the corresponding probabilities in a situation-dependent fashion so as to reflect the fact that legs frequently occlude each other. Experiments carried out with a mobile robot illustrate that our approach can robustly and efficiently track multiple people even in situations with high levels of occlusion.
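As a concrete illustration of the per-leg filters named above, here is a minimal sketch of one constant-velocity Kalman filter track, including the Mahalanobis gating distance commonly used in data association. The class name and noise parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

class LegTrack:
    """Constant-velocity Kalman filter for one leg; state is [x, y, vx, vy]."""
    def __init__(self, x, y, dt=0.1):
        self.x = np.array([x, y, 0.0, 0.0])
        self.P = np.diag([0.1, 0.1, 1.0, 1.0])    # initial state uncertainty
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], float)  # constant-velocity transition
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)   # the laser observes position only
        self.Q = 0.01 * np.eye(4)                  # process noise (assumed)
        self.R = 0.05 * np.eye(2)                  # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        y = np.asarray(z) - self.H @ self.x        # innovation
        S = self.H @ self.P @ self.H.T + self.R    # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

    def mahalanobis(self, z):
        """Gating distance between a detection and this track's prediction."""
        y = np.asarray(z) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        return float(y @ np.linalg.inv(S) @ y)
```

In a multi-hypothesis setting, one such filter would run per leg, with the gating distance feeding the association probabilities between detections and existing tracks.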
Abstract—People detection and tracking is a key component for robots and autonomous vehicles in human environments. While prior work has mainly employed image or 2D range data for this task, in this paper we address the problem using 3D range data. In our approach, a top-down classifier selects hypotheses from a bottom-up detector, both based on sets of boosted features. The bottom-up detector learns a layered person model from a bank of specialized classifiers for different height levels of people that collectively vote into a continuous space. Modes in this space represent detection candidates, each of which postulates a segmentation hypothesis of the data. In the top-down step, the candidates are classified using features computed in voxels of a boosted volume tessellation. We learn the optimal volume tessellation, as it enables the method to deal stably with sparsely sampled and articulated objects. We then combine the detector with tracking in 3D, for which we take a multi-target multi-hypothesis tracking approach. The method neither needs a ground-plane assumption nor relies on background learning. The results from experiments in populated urban environments demonstrate 3D tracking and highly robust people detection up to 20 m with equal error rates of at least 93%.

I. INTRODUCTION

People detection and tracking is a key skill for mobile robots and intelligent cars in populated environments. While most of the related work in this area has used vision for this task, range sensing is a particularly interesting sensor modality due to its accuracy, large field of view, and robustness with respect to illumination changes and vibrations, the latter points being of particular relevance for mobile observers.

In this paper we address two problems: detecting people in 3D range data and tracking people in 3D space. We extend our previous work on 3D people detection [1] by a tracking stage and an additional top-down procedure in the detection pipeline.
This procedure aims at reducing false positives, which typically occur with sparsely sampled individuals at large distances from the sensor. We further combine detection with tracking and present results from a tracker that is able to estimate the motion state of multiple people in 3D. To this end, we employ the multi-hypothesis tracking (MHT) approach by Reid [2] and Cox et al. [3]. In the experiments we compare our approach with related techniques for detection in 3D range data, in particular spin images [4] and template-based classification.

While there is little related work on people detection and tracking in 3D, many researchers have addressed this task using 2D range data. In early works [5], [6], [7], people are detected using ad-hoc classifiers that look for moving local minima in the scan. The first principled learning approach was taken by Arras et al. [8], where a classifier for 2D point clouds was learned by boosting a set of geometric and statistical features. As there is a natural performance limit when using only a single layer of 2D range data, several authors ha...
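To make the boosting step mentioned above concrete, here is a small, self-contained AdaBoost sketch with single-feature threshold stumps, in the spirit of boosting geometric and statistical features; the function names and exhaustive stump search are illustrative simplifications, not the paper's implementation.

```python
import numpy as np

def train_adaboost(X, y, n_rounds=20):
    """AdaBoost over threshold stumps on single feature columns; y in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                       # example weights
    ensemble = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):                        # exhaustive search over stumps
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] >= thr, 1, -1)
                    err = w[pred != y].sum()      # weighted training error
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = max(err, 1e-10)                     # avoid division by zero
        alpha = 0.5 * np.log((1 - err) / err)     # weight of this weak learner
        pred = sign * np.where(X[:, j] >= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)            # upweight misclassified examples
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.where(X[:, j] >= t, 1, -1)
                for a, j, t, s in ensemble)
    return np.sign(score)
```

In a detector of this kind, each feature column would hold one geometric or statistical descriptor of a candidate segment, and the boosted ensemble decides person vs. non-person.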
People detection and tracking are important in many situations where robots and humans work and live together. But unlike targets in traditional tracking problems, people typically move and act under the constraints of their environment. The probabilities and frequencies with which people appear, disappear, walk, or stand are not uniform but vary over space, making human behavior strongly place-dependent. In this paper we present a model that encodes spatial priors on human behavior and show how the model can be incorporated into a people-tracking system. We learn a non-homogeneous spatial Poisson process that improves data association in a multi-hypothesis target tracker through more informed probability distributions over hypotheses. We further present a place-dependent motion model whose predictions follow the space-usage patterns that people take, as described by the learned spatial Poisson process. Large-scale experiments in different indoor and outdoor environments using laser range data demonstrate how both extensions lead to more accurate tracking behavior in terms of data-association errors and number of track losses. The extended tracker is also slightly more efficient than the baseline approach. The system runs in real time on a typical desktop computer.
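A minimal sketch of how such a spatial prior could be learned and queried is shown below; the grid discretization, pseudo-count prior, and class name are our own illustrative assumptions, not the paper's model.

```python
import math
from collections import Counter

class SpatialPoissonMap:
    """Grid-discretized non-homogeneous Poisson process over appearance events."""
    def __init__(self, cell_size=1.0):
        self.cell_size = cell_size
        self.counts = Counter()      # appearance events per grid cell
        self.total_time = 0.0        # total observation time in seconds

    def _cell(self, x, y):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def learn(self, events, observation_time):
        """events: (x, y) positions where new tracks appeared while observing."""
        for x, y in events:
            self.counts[self._cell(x, y)] += 1
        self.total_time += observation_time

    def rate(self, x, y):
        # Maximum-likelihood rate estimate with a small pseudo-count prior,
        # so cells without observed events keep a non-zero appearance rate.
        return (self.counts[self._cell(x, y)] + 0.1) / self.total_time

    def log_prob_new_track(self, x, y, dt):
        """Log-probability that at least one person appears in this cell in dt."""
        lam = self.rate(x, y) * dt
        return math.log1p(-math.exp(-lam))
```

In a multi-hypothesis tracker, a term of this form would replace a uniform new-track prior in the hypothesis probabilities, so that track births near doors and corridors are rated more likely than births in open space.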