Abstract: Simultaneous localization, mapping and moving object tracking (SLAMMOT) involves both simultaneous localization and mapping (SLAM) in dynamic environments and detecting and tracking these dynamic objects. In this paper, we establish a mathematical framework to integrate SLAM and moving object tracking. We describe two solutions: SLAM with generalized objects, and SLAM with detection and tracking of moving objects (DATMO). SLAM with generalized objects calculates a joint posterior over all generalized objects a…
“…The Robot Object Mapping Algorithm [8] detects moveable objects by detecting differences in the maps built by SLAM at different times. Detection and Tracking of Moving Objects [9] is an approach that seeks to detect and track moving objects while performing SLAM. Relational Object Maps [10] reasons about spatial relationships between objects in a map.…”
Abstract. Robot localization and mapping algorithms commonly represent the world as a static map. In reality, human environments consist of many movable objects like doors, chairs and tables. Recognizing that such environments often have a large number of instances of a small number of types of objects, we propose an alternative approach, Model-Instance Object Mapping, which reasons about the models of objects distinctly from their different instances. Observations classified as short-term features by Episodic non-Markov Localization are clustered to detect object instances. For each object instance, an occupancy grid is constructed and compared to every other object instance to build a directed similarity graph. Common object models are discovered as strongly connected components of the graph, and their models as well as the distribution of instances are saved as the final Model-Instance Object Map. By keeping track of the poses of observed instances of object models, Model-Instance Object Maps learn the most probable locations for commonly observed object models. We present results of Model-Instance Object Mapping over the course of a month in our indoor office environment, and highlight the common object models thus learnt in an unsupervised manner.
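The instance-grouping step described in this abstract (a directed similarity graph over object instances, with common object models discovered as strongly connected components) can be sketched as follows. This is a minimal illustration, not the paper's implementation: grid_similarity is a toy stand-in for the occupancy-grid comparison, the threshold value is arbitrary, and grids are plain binary lists rather than real occupancy grids.

```python
def grid_similarity(a, b):
    # Toy stand-in for the paper's occupancy-grid comparison:
    # fraction of cells on which two equally sized binary grids agree.
    cells = len(a) * len(a[0])
    same = sum(a[i][j] == b[i][j]
               for i in range(len(a)) for j in range(len(a[0])))
    return same / cells

def object_models(instances, threshold=0.9):
    """Group object instances into models as the strongly connected
    components of the directed similarity graph (Tarjan's algorithm)."""
    n = len(instances)
    # Directed edge i -> j when instance i matches instance j well enough.
    adj = {i: [j for j in range(n) if i != j
               and grid_similarity(instances[i], instances[j]) >= threshold]
           for i in range(n)}
    index, low = {}, {}
    stack, on_stack = [], set()
    sccs = []
    counter = 0

    def strongconnect(v):
        nonlocal counter
        index[v] = low[v] = counter
        counter += 1
        stack.append(v)
        on_stack.add(v)
        for w in adj[v]:
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:
            # v is the root of a strongly connected component.
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in range(n):
        if v not in index:
            strongconnect(v)
    return sccs
```

For example, two identical 2x2 grids and one dissimilar grid yield two models: one component containing the two matching instances and one singleton.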
“…A number of approaches focused on the use of vision exclusively (Zielke et al, 1993; Dickmanns, 1998; Dellaert and Thorpe, 1998), whereas others utilized laser range finders (Zhao and Thorpe, 1998; Streller et al, 2002; Wang et al, 2007) sometimes in combination with vision (Wender and Dietmayer, 2008). We give an overview of prior art in Sect.…”
Section: Introduction
“…Zhao and Thorpe (1998); Streller et al (2002); Wang (2004); Wender and Dietmayer (2008)) including most recent developments by the UGC participants (Darms et al, 2008;Leonard et al, 2008). Typically these approaches proceed in three stages: data segmentation, data association, and Bayesian filter update.…”
Situational awareness is crucial for autonomous driving in urban environments. This paper describes the moving vehicle detection and tracking module that we developed for our autonomous driving robot Junior. The robot won second place in the Urban Grand Challenge, an autonomous driving race organized by the U.S. Government in 2007. The module provides reliable detection and tracking of moving vehicles from a high-speed moving platform using laser range finders. Our approach models both dynamic and geometric properties of the tracked vehicles and estimates them using a single Bayes filter per vehicle. We present the notion of motion evidence, which allows us to overcome the low signal-to-noise ratio that arises during rapid detection of moving vehicles in noisy urban environments. Furthermore, we show how to build consistent and efficient 2D representations out of 3D range data and how to detect poorly visible black vehicles. Experimental validation includes the most challenging conditions presented at the Urban Grand Challenge as well as other urban settings.
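The one-Bayes-filter-per-vehicle design mentioned in this abstract can be illustrated with a minimal constant-velocity Kalman filter along one axis. This is a sketch under simplifying assumptions, not the paper's filter: the actual module jointly estimates dynamic and geometric vehicle properties from laser range data, whereas here the state is just [position, velocity] and the noise parameters q and r are illustrative values.

```python
class VehicleTracker:
    """One Kalman (Bayes) filter per tracked vehicle.
    State: [position, velocity] along one axis (toy model)."""

    def __init__(self, x0=0.0, v0=0.0):
        self.x = [x0, v0]                   # state estimate
        self.P = [[1.0, 0.0], [0.0, 1.0]]   # state covariance

    def predict(self, dt, q=0.1):
        # Constant-velocity motion model: F = [[1, dt], [0, 1]].
        x, v = self.x
        self.x = [x + v * dt, v]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        # P <- F P F^T + Q, with Q = q * I for simplicity.
        self.P = [[p00 + dt * (p10 + p01) + dt * dt * p11 + q,
                   p01 + dt * p11],
                  [p10 + dt * p11,
                   p11 + q]]

    def update(self, z, r=0.5):
        # Measurement is position only: H = [1, 0], noise variance r.
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        s = p00 + r            # innovation covariance
        k0 = p00 / s           # Kalman gain
        k1 = p10 / s
        y = z - self.x[0]      # innovation
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        # P <- (I - K H) P
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
```

Feeding the filter noiseless position measurements from a vehicle moving at 1 m/s, both the position and velocity estimates converge toward the true values within a few predict/update cycles.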
“…We do not consider the full SLAM problem here, but instead work in a simulation of CrunchBot having zero odometry noise to avoid the localisation problem and focus on mapping only. Related object-based mapping models have recently appeared [51,23,46,40] using laser sensors to recognise and learn complex but nonhierarchical spatial models. However as data available through whiskers to CrunchBot is much sparser than that from laser scanners, the required level of sensor detail is unavailable, therefore we compensate with the new mapping technique of fusing contact reports into hierarchical models.…”
The paradigm case for robotic mapping assumes large quantities of sensory information which allow the use of relatively weak priors. In contrast, the present study considers the mapping problem for a mobile robot, CrunchBot, where only sparse, local tactile information from whisker sensors is available. To compensate for such weak likelihood information, we make use of low-level signal processing and strong hierarchical object priors. Hierarchical models were popular in classical blackboard systems but are here applied in a Bayesian setting as a mapping algorithm. The hierarchical models require reports of whisker distance to contact and of surface orientation at contact, and we demonstrate that this information can be retrieved by classifiers from strain data collected by CrunchBot's physical whiskers. We then provide a demonstration in simulation of how this information can be used to build maps (but not yet full SLAM) in a zero-odometry-noise environment containing walls and table-like hierarchical objects.
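The idea of fusing sparse contact reports with strong object priors can be sketched as a discrete Bayesian update over object hypotheses. Everything below is an illustrative assumption, not CrunchBot's actual model: the two hypotheses ("wall", "table_leg"), the Gaussian orientation likelihoods, and the sigma value are hypothetical stand-ins for the classifier outputs described in the abstract.

```python
import math

def angle_likelihood(mean, sigma=15.0):
    # Unnormalized Gaussian likelihood of an observed contact-surface
    # orientation (degrees), centered on the hypothesis's expected angle.
    return lambda angle: math.exp(-((angle - mean) ** 2) / (2 * sigma ** 2))

def fuse_contact(prior, likelihoods, report):
    """Discrete Bayes update: posterior(h) ∝ prior(h) * P(report | h)."""
    post = {h: prior[h] * likelihoods[h](report) for h in prior}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

# Hypothetical two-hypothesis example: a wall presents a contact surface
# near 0 degrees, a table leg near 90 degrees.
likelihoods = {"wall": angle_likelihood(0.0),
               "table_leg": angle_likelihood(90.0)}
```

A contact report with surface orientation near 0 degrees then shifts the belief sharply toward the "wall" hypothesis; each additional report is fused by reusing the posterior as the next prior.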