This paper proposes an environmental representation approach based on hybrid metric and topological maps as a key component for mobile robot navigation. The focus is on an ego-centric pose graph structure that uses keyframes to capture the local properties of the scene. With the aim of reducing data redundancy and suppressing sensor noise whilst maintaining a dense, compact representation of the environment, neighbouring augmented spheres are fused into a single representation. To this end, an uncertainty propagation error model is formulated for outlier rejection and data fusion, enhanced with the notion of landmark stability over time. Finally, our algorithm is tested thoroughly on a newly developed wide-angle 360° field of view (FOV) spherical sensor, where improvements in trajectory drift, compactness and tracking error are demonstrated.
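The abstract above describes fusing neighbouring augmented spheres via an uncertainty propagation model with outlier rejection. As a minimal sketch of that idea (the function name, the per-pixel Gaussian depth model and the chi-square gate value are illustrative assumptions, not the paper's actual formulation), two overlapping depth measurements can be fused by inverse-variance weighting, with a Mahalanobis gate rejecting inconsistent pairs as outliers:

```python
import numpy as np

def fuse_depths(d1, s1, d2, s2, gate=9.0):
    """Fuse two depth maps with per-pixel variances s1, s2.

    A measurement pair is fused by inverse-variance weighting only if
    the squared Mahalanobis distance between the two depths is below
    `gate`; otherwise the more certain (lower-variance) depth is kept.
    """
    d1, s1, d2, s2 = map(np.asarray, (d1, s1, d2, s2))
    maha2 = (d1 - d2) ** 2 / (s1 + s2)      # chi-square test statistic
    w1, w2 = 1.0 / s1, 1.0 / s2             # inverse-variance weights
    fused_d = (w1 * d1 + w2 * d2) / (w1 + w2)
    fused_s = 1.0 / (w1 + w2)               # variance shrinks after fusion
    keep = np.where(s1 <= s2, d1, d2)       # outlier: keep best single view
    keep_s = np.minimum(s1, s2)
    ok = maha2 < gate
    return np.where(ok, fused_d, keep), np.where(ok, fused_s, keep_s)
```

Note how fusing two consistent measurements halves the variance, which is the mechanism by which repeated observations suppress sensor noise over time.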
Normal segmentation of geometric range data has been a common practice integrated in the building blocks of point cloud registration. Most well-known point-to-plane and plane-to-plane state-of-the-art registration techniques make use of normal features to ensure a better alignment. However, normals are influenced by noise, scanning patterns and differences in density. Consequently, the resulting normals in a source point cloud and a target point cloud will not be perfectly matched, thereby hindering the alignment process due to weak inter-surface correspondences. In this paper, a novel approach is introduced that exploits normals differently: points of the same surface are clustered into one topological pattern, and all the points held by this model are replaced by one representative point. These representative points are then used for the association step of registration, instead of directly injecting all the points with their extracted normals. In our work, normals are only used to distinguish different local surfaces and are ignored in the later stages of point cloud alignment. This approach overcomes two major shortcomings: the problem of correspondences between point clouds of different densities, and noisy normals caused by noise inherent in the sensors. In so doing, the convergence domain between two reference frames tethered to two dissimilar depth sensors is considerably enlarged, leading to robust localization. Moreover, our approach improves the precision and reduces the computation time of the alignment, since matching is performed on a reduced set of points. Finally, these claims are backed up by experiments on real data that demonstrate the robustness and efficiency of the proposed approach.
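The clustering step described above — grouping points of one local surface by normal agreement and keeping a single representative point for matching — can be sketched as follows. This is a hedged illustration only: the greedy cone-angle grouping, the centroid as representative, and the 15° threshold are assumptions for the sketch, not the paper's exact segmentation method.

```python
import numpy as np

def surface_representatives(points, normals, angle_thresh_deg=15.0):
    """Greedy clustering of points whose normals agree within a cone.

    Each cluster stands for one locally planar patch; its centroid is
    the single representative point used later for matching, so the
    normals themselves never enter the alignment cost.
    """
    normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    cos_t = np.cos(np.deg2rad(angle_thresh_deg))
    labels = -np.ones(len(points), dtype=int)
    reps = []
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        # all still-unlabelled points whose normal lies in the cone of n_i
        mask = (labels == -1) & (normals @ normals[i] > cos_t)
        labels[mask] = len(reps)
        reps.append(points[mask].mean(axis=0))
    return np.array(reps), labels
```

Because the downstream matcher only ever sees the representative points, two sensors with very different sampling densities produce comparable reduced sets, which is the density-invariance argument made in the abstract.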
Abstract: Visual mapping is a required capability for practical autonomous mobile robots, an area with a growing industry whose applications range from the service to the industrial sector. Prior to map building, Visual Odometry (VO) is an essential step in the process of pose graph construction. In this work, we first propose to tackle the pose estimation problem by using both photometric and geometric information in a direct RGB-D image registration method. Secondly, the mapping problem is tackled with a pose graph representation, whereby, given a database of augmented visual spheres, a travelled trajectory with redundant information is pruned down to a skeletal pose graph. Both methods are evaluated on data acquired with a recently proposed omnidirectional RGB-D sensor for indoor environments.
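The pruning of a redundant trajectory down to a skeletal pose graph, as described above, is commonly implemented by keeping a new keyframe only when the robot has moved or rotated enough since the last kept one. The sketch below assumes that criterion and illustrative thresholds; the paper's actual pruning rule (based on augmented visual spheres) may differ.

```python
import numpy as np

def prune_trajectory(poses, trans_thresh=0.5, rot_thresh_deg=10.0):
    """Reduce a dense list of poses to a skeletal pose graph.

    `poses` is a list of 4x4 homogeneous transforms in the world frame.
    A pose becomes a keyframe only when its relative motion to the
    previously kept keyframe exceeds a translation or rotation threshold.
    """
    keep = [0]                                   # always keep the first pose
    for i in range(1, len(poses)):
        rel = np.linalg.inv(poses[keep[-1]]) @ poses[i]
        t = np.linalg.norm(rel[:3, 3])           # relative translation (m)
        cos_a = (np.trace(rel[:3, :3]) - 1.0) / 2.0
        angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
        if t > trans_thresh or angle > rot_thresh_deg:
            keep.append(i)
    return keep
```

For example, a robot creeping forward in 0.2 m steps would retain only every third pose under a 0.5 m threshold, discarding the redundant intermediate frames.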