Many modern sensors used for mapping produce 3D point clouds, which are typically registered together using the iterative closest point (ICP) algorithm. Because the performance of ICP variants depends on the environment and the sensor, hundreds of variations have been published; yet no comparison framework is available, making the selection of an appropriate variant for particular experimental conditions arduous. The first contribution of this paper is a protocol that allows comparison between ICP variants over a broad range of inputs. The second contribution is an open-source ICP library that is fast enough to be usable in multiple real-world applications while being modular enough to ease comparison. This work was supported by the EU FP7 IP projects Natural Human-Robot Cooperation in Dynamic Environments (ICT-247870) and myCopter (FP7-AAT-2010-RTD-1). F. Pomerleau was supported by a fellowship from the Fonds québécois de recherche sur la nature et les technologies (FQRNT).
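The loop that all such variants share can be sketched minimally: match each source point to its nearest target point, solve for the rigid transform in closed form (the Kabsch/SVD solution), apply it, and repeat. Below is a toy point-to-point version in Python/NumPy; it omits the data filters, outlier rejection, and k-d tree matching that practical variants (including the library described above) add, and is only an illustrative sketch:

```python
import numpy as np

def icp(source, target, iterations=20):
    """Minimal point-to-point ICP sketch: align `source` onto `target`.

    Assumes roughly pre-aligned, overlapping point sets and uses
    brute-force nearest-neighbor matching. Returns the aligned copy
    of `source`.
    """
    src = source.copy()
    for _ in range(iterations):
        # 1. Match: nearest target point for each source point.
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[np.argmin(d, axis=1)]
        # 2. Solve: rigid transform minimizing squared error (Kabsch/SVD).
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        # 3. Apply the transform and iterate.
        src = src @ R.T + t
    return src
```

Real variants differ precisely in how steps 1 and 2 are filtered and weighted, which is what makes a systematic comparison protocol necessary.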
The number of registration solutions in the literature has bloomed recently. The iterative closest point algorithm, for example, could be considered the backbone of many laser-based localization and mapping systems. Although such solutions are widely used, comparing them on a fair basis remains a common challenge. The main limitation is the lack of accurate ground truth in current data sets, which usually cover environments over only a small range of organization levels. In computer vision, the Stanford 3D Scanning Repository pushed forward the fields of point cloud registration and object modeling by providing high-quality scanned objects with precise localization. We aim to provide similarly high-caliber working material to the robotics and computer vision communities, but with sceneries instead of objects. We propose eight point cloud sequences acquired in locations covering the diversity of environments that modern robots are likely to encounter, ranging from inside an apartment to a woodland area. The core of the data sets consists of 3D laser point clouds, for which supporting data (gravity, magnetic north, and GPS) are given for each pose. A special effort has been made to ensure global positioning of the scanner within mm-range precision, independent of environmental conditions. This will allow for the development of improved registration algorithms when mapping challenging environments, such as those found in real-world situations.
New applications of mobile robotics in dynamic urban areas require more than the single-session geometric maps that have dominated simultaneous localization and mapping (SLAM) research to date; maps must be updated as the environment changes and include a semantic layer (such as road network information) to aid motion planning in dynamic environments. We present an algorithm for long-term localization and mapping in real time using a three-dimensional (3D) laser scanner. The system infers the static or dynamic state of each 3D point in the environment based on repeated observations. The velocity of each dynamic point is estimated without requiring object models or explicit clustering of the points. At any time, the system is able to produce a most-likely representation of the underlying static scene geometry. By storing the time history of velocities, we can infer the dominant motion patterns within the map. The result is an online mapping and localization system specifically designed to enable long-term autonomy within highly dynamic environments. We validate the approach using data collected around the campus of ETH Zurich over seven months and several kilometers of navigation. To the best of our knowledge, this is the first work to unify long-term map update with tracking of dynamic objects.
Autonomous mobile robots are increasingly employed to take measurements for environmental monitoring, but planning informative, measurement-rich paths through large three-dimensional environments is still challenging. Designing such paths, known as the informative path planning (IPP) problem, has been shown to be NP-hard. Existing algorithms focus on providing guarantees on suboptimal solutions, but do not scale well to large problems. In this paper, we introduce a novel IPP algorithm that uses an evolutionary strategy to optimize a parameterized path in continuous space, subject to various constraints regarding path budgets and the motion capabilities of an autonomous mobile robot. Moreover, we introduce a replanning scheme to adapt the planned paths according to the measurements taken in situ during data collection. When compared to two state-of-the-art solutions, our method provides competitive results at significantly lower computation times and memory requirements. The proposed replanning scheme enables building models with up to 25% lower uncertainty within an initially unknown area of interest. Besides presenting theoretical results, we tailored the proposed algorithms for data collection using an autonomous surface vessel during an ecological study, in which the method was validated through three field deployments on Lake Zurich, Switzerland. Spatiotemporal variations are shown over a period of three months and in an area of 350 m × 350 m × 13 m. Whereas our theoretical solution can be applied to multiple applications, our field results specifically highlight the effectiveness of our planner for monitoring toxic microorganisms in a pre-alpine lake and for identifying hot-spots within their distribution.
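The evolutionary core of such a planner can be illustrated with a toy (1+1) evolution strategy: a path is parameterized as a vector of waypoint coordinates, mutations perturb them, candidates that violate the travel budget are rejected, and a better-scoring path replaces the incumbent. The scoring function, waypoint count, and fixed mutation step below are illustrative assumptions, not the paper's planner, which uses a more sophisticated strategy and an uncertainty-based objective:

```python
import numpy as np

def plan_path(field, start, budget, n_waypoints=5, generations=200, seed=0):
    """Toy (1+1) evolution strategy for budget-constrained path planning.

    `field(p)` scores the information gained by sampling at point p
    (a stand-in for the uncertainty-reduction objective). Candidates
    whose total travel length exceeds `budget` are rejected, mirroring
    the path-budget constraint described above.
    """
    rng = np.random.default_rng(seed)

    def length(wps):
        pts = np.vstack([start, wps])
        return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

    def score(wps):
        return sum(field(p) for p in wps)

    # Initialize with small random perturbations around the start.
    best = start + 0.1 * rng.standard_normal((n_waypoints, 2))
    best_score = score(best) if length(best) <= budget else -np.inf
    sigma = 0.5                        # fixed mutation step, for simplicity
    for _ in range(generations):
        cand = best + sigma * rng.standard_normal(best.shape)
        if length(cand) > budget:
            continue                   # infeasible: violates the budget
        s = score(cand)
        if s > best_score:             # greedy (1+1) selection
            best, best_score = cand, s
    return best, best_score
```

A replanning scheme, as in the paper, would re-run this optimization after each batch of in-situ measurements with an updated `field`.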
The paper describes experience with applying a user-centric design methodology in developing systems for human-robot teaming in Urban Search and Rescue. A human-robot team consists of several semi-autonomous robots (rovers/UGVs, microcopters/UAVs), several humans at an off-site command post (mission commander, UGV operators), and one on-site human (UAV operator). This system has been developed in close cooperation with several rescue organizations and has been deployed in a real-life tunnel-accident use case. The human-robot team jointly explores an accident site, communicating through a multi-modal team interface and spoken dialogue. The paper describes the development of this complex socio-technical system per se, as well as recent experience in evaluating its performance.
Robotic sensors are promising instruments for monitoring spatial phenomena. Oftentimes, rather than aiming to achieve low prediction error everywhere, one is interested in determining whether the phenomenon exhibits certain critical behavior. In this paper, we consider the problem of focusing autonomous sampling to determine whether and where the sensed spatial field exceeds a given threshold value. We introduce a receding-horizon path planner, LSE-DP, which plans efficient paths for sensing in order to reduce our uncertainty specifically around the threshold value. We report fully autonomous field experiments with an Autonomous Surface Vessel (ASV) in an aquatic monitoring setting, which demonstrate the effectiveness of the proposed method. LSE-DP is able to reduce the uncertainty around the threshold value of interest to 68% when compared to non-adaptive methods.
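The threshold-focused idea can be sketched with a greedy one-dimensional stand-in: at each step the robot moves to the neighboring cell whose current estimate is most ambiguous with respect to the threshold, measures there, and updates a running mean. This toy sampler is only in the spirit of level set estimation; LSE-DP itself plans multi-step paths over a receding horizon rather than single greedy moves:

```python
import numpy as np

def lse_greedy(field, threshold, grid, start_idx, steps, noise=0.05, seed=0):
    """Greedy sketch of threshold-focused sampling on a 1D grid.

    Unvisited cells carry a prior estimate equal to `threshold`
    (maximally ambiguous), so the sampler explores first and then
    concentrates its measurements near the level-set boundary.
    Returns the estimated super-level set as a boolean mask.
    """
    rng = np.random.default_rng(seed)
    est = np.full(len(grid), float(threshold))  # prior: fully ambiguous
    counts = np.zeros(len(grid))
    pos = start_idx
    for _ in range(steps):
        # Among reachable neighbors, go where the estimate is most
        # ambiguous with respect to the threshold.
        neighbors = [i for i in (pos - 1, pos, pos + 1) if 0 <= i < len(grid)]
        pos = min(neighbors, key=lambda i: abs(est[i] - threshold))
        y = field(grid[pos]) + noise * rng.standard_normal()
        counts[pos] += 1
        est[pos] += (y - est[pos]) / counts[pos]  # incremental mean
    return est > threshold
```

On a monotone field, most of the measurement budget ends up spent near the cell where the field crosses the threshold, which is exactly the uncertainty-reduction behavior the planner above targets.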