Many modern sensors used for mapping produce 3D point clouds, which are typically registered together using the iterative closest point (ICP) algorithm. Because the performance of ICP depends on the environment and the sensor, hundreds of variants have been published. However, no comparison framework is available, making the selection of an appropriate variant for particular experimental conditions arduous. The first contribution of this paper is a protocol that allows ICP variants to be compared across a broad range of inputs. The second contribution is an open-source ICP library, which is fast enough to be usable in multiple real-world applications while being modular enough to ease comparison. This work was supported by the EU FP7 IP projects Natural Human-Robot Cooperation in Dynamic Environments (ICT-247870) and myCopter (FP7-AAT-2010-RTD-1). F. Pomerleau was supported by a fellowship from the Fonds québécois de recherche sur la nature et les technologies (FQRNT).
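The core ICP loop that all the variants above build on can be sketched as follows. This is a minimal illustration, not the paper's library: correspondences are brute-force nearest neighbours, and the rigid transform is recovered with the standard SVD (Kabsch) solution; real variants add outlier filtering, weighting, and convergence tests.

```python
import numpy as np

def icp_step(source, target):
    """One ICP iteration: match each source point to its nearest target
    point, then compute the rigid transform (R, t) that best aligns the
    matched pairs (Kabsch / SVD solution)."""
    # Brute-force nearest-neighbour correspondences (for clarity only).
    d = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=2)
    matched = target[d.argmin(axis=1)]
    # Centre both point sets.
    mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
    S, T = source - mu_s, matched - mu_t
    # SVD of the cross-covariance gives the optimal rotation.
    U, _, Vt = np.linalg.svd(S.T @ T)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_t - R @ mu_s
    return R, t

def icp(source, target, iters=10):
    """Apply ICP steps for a fixed iteration budget (real implementations
    stop on a convergence criterion instead)."""
    src = source.copy()
    for _ in range(iters):
        R, t = icp_step(src, target)
        src = src @ R.T + t
    return src
```

With a small initial misalignment the loop recovers the rigid transform between the two clouds; large misalignments are exactly where the published variants diverge in behaviour.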
The number of registration solutions in the literature has bloomed recently. The iterative closest point algorithm, for example, can be considered the backbone of many laser-based localization and mapping systems. Although such solutions are widely used, comparing them on a fair basis remains a challenge. The main limitation is the lack of accurate ground truth in current data sets, which usually cover environments over only a small range of organization levels. In computer vision, the Stanford 3D Scanning Repository pushed forward the fields of point cloud registration and object modeling by providing high-quality scanned objects with precise localization. We aim to provide similarly high-caliber working material to the robotics and computer vision communities, but with scenes instead of objects. We propose eight point cloud sequences acquired in locations covering the environmental diversity that modern robots are likely to encounter, ranging from the inside of an apartment to a woodland area. The core of the data sets consists of 3D laser point clouds, for which supporting data (gravity, magnetic north, and GPS) are given for each pose. A special effort has been made to ensure global positioning of the scanner within millimeter-range precision, independent of environmental conditions. This will allow for the development of improved registration algorithms for mapping challenging environments, such as those found in real-world situations.
New applications of mobile robotics in dynamic urban areas require more than the single-session geometric maps that have dominated simultaneous localization and mapping (SLAM) research to date; maps must be updated as the environment changes and include a semantic layer (such as road network information) to aid motion planning in dynamic environments. We present an algorithm for long-term localization and mapping in real time using a three-dimensional (3D) laser scanner. The system infers the static or dynamic state of each 3D point in the environment based on repeated observations. The velocity of each dynamic point is estimated without requiring object models or explicit clustering of the points. At any time, the system is able to produce a most-likely representation of the underlying static scene geometry. By storing the time history of velocities, we can infer the dominant motion patterns within the map. The result is an online mapping and localization system specifically designed to enable long-term autonomy in highly dynamic environments. We validate the approach using data collected around the campus of ETH Zurich over seven months and several kilometers of navigation. To the best of our knowledge, this is the first work to unify long-term map updating with tracking of dynamic objects.
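Inferring a per-point static/dynamic state from repeated observations is commonly done with a binary Bayes filter in log-odds form. The sketch below is illustrative only, with made-up increment values, not the paper's actual model: a "hit" (the point is re-observed at its stored location) raises the belief that it is static, while a "miss" (a laser ray passes through that location) lowers it.

```python
import math

def update_log_odds(log_odds, hit, l_hit=0.4, l_miss=-0.3):
    """One binary Bayes filter update for a single 3D point.
    hit=True: the point was re-observed where it was stored (static evidence).
    hit=False: a ray passed through its location (dynamic evidence).
    The increments l_hit / l_miss are illustrative, not from the paper."""
    return log_odds + (l_hit if hit else l_miss)

def p_static(log_odds):
    """Convert accumulated log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-log_odds))
```

Working in log-odds makes each update a single addition and keeps the belief numerically stable over months of repeated observations.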
The paper describes experience with applying a user-centric design methodology in developing systems for human-robot teaming in Urban Search and Rescue. A human-robot team consists of several semi-autonomous robots (rovers/UGVs, microcopter/UAVs), several humans at an off-site command post (mission commander, UGV operators) and one on-site human (UAV operator). This system has been developed in close cooperation with several rescue organizations, and has been deployed in a real-life tunnel accident use case. The human-robot team jointly explores an accident site, communicating using a multi-modal team interface and spoken dialogue. The paper describes the development of this complex socio-technical system per se, as well as recent experience in evaluating the performance of this system.
This paper describes our experience in designing, developing and deploying systems for supporting human-robot teams during disaster response. It is based on R&D performed in the EU-funded project NIFTi. NIFTi aimed at building intelligent, collaborative robots that could work together with humans in exploring a disaster site, to make a situational assessment. To achieve this aim, NIFTi addressed key scientific design aspects in building up situation awareness in a human-robot team, developing systems using a user-centric methodology involving end users throughout the entire R&D cycle, and regularly deploying implemented systems under real-life circumstances for experimentation and testing. This has yielded substantial scientific advances in the state-of-the-art in robot mapping, robot autonomy for operating in harsh terrain, collaborative planning, and human-robot interaction. NIFTi deployed its system in actual disaster response activities in Northern Italy, in July 2012, aiding in structure damage assessment.
In the context of environment reconstruction for inspection, it is important to handle sensor noise properly to avoid distorted representations. A short survey of available sensors is provided to help select a sensor based on the payload capacity of a robot. We then propose uncertainty models based on empirical results for three laser rangefinders: the Hokuyo URG-04LX, the Hokuyo UTM-30LX, and the Sick LMS-151. The methodology used to characterize these sensors targets in particular different metallic materials, which often yield distorted measurements due to reflection. We also evaluate the impact of sensor noise on surface normal vector reconstruction and conclude with observations about the impact of sunlight and reflections.
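The surface normal reconstruction whose noise sensitivity is evaluated above is typically done by a local plane fit via PCA: the normal is the eigenvector of the neighbourhood covariance with the smallest eigenvalue. A minimal sketch, assuming the neighbourhood has already been gathered (e.g. by a k-nearest-neighbour query):

```python
import numpy as np

def estimate_normal(neighbors):
    """Estimate a surface normal from a local neighbourhood of 3D points
    (an (N, 3) array): plane fit via PCA of the neighbourhood covariance."""
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / len(neighbors)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    normal = eigvecs[:, 0]                  # direction of least variance
    # Ratio of smallest to total eigenvalue measures local "flatness";
    # range noise inflates it, degrading the normal estimate.
    flatness = eigvals[0] / eigvals.sum()
    return normal, flatness
```

This is why range noise matters for reconstruction: perturbing the points inflates the smallest eigenvalue and tilts the recovered normal, which is exactly the effect the empirical uncertainty models above help quantify.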
Keywords: low-throughput user interface; Bayesian programming; brain-computer interface; neurorobotics; EEG; error-related potentials. This paper presents a novel semi-autonomous navigation strategy designed for low-throughput interfaces. A mobile robot (e.g. an intelligent wheelchair) proposes the most probable action, as analyzed from the environment, to a human user who can either accept or reject the proposition. In the case of refusal, the robot will propose another action, until both entities agree on what needs to be done. In an unknown environment, the robotic system first extracts features so as to recognize places of interest where a human-robot interaction should take place (e.g. crossings). Based on the local topology, relevant actions are then proposed, the user providing answers by means of a button or a brain-computer interface (BCI). Our navigation strategy is successfully tested both in simulation and with a real robot, and a feasibility study for the use of a BCI confirms the potential of such an interface.