We present a mobile robot whose goal is to autonomously explore an unknown indoor environment and to build a semantic map containing high-level information similar to that extracted by humans. This information includes the rooms, their connectivity, the objects they contain, and the materials of the walls and floor. The robot was developed to participate in CAROTTE, a French exploration and mapping contest whose goal is to produce easily interpretable maps of an unknown environment. In particular, we present our object detection approach, which uses a color+depth camera and fuses 3D, color, and texture information through a neural network for robust object recognition. We also present our material recognition approach, based on machine learning applied to vision. We demonstrate the performance of these modules on image databases and provide examples of the full system working in real environments.
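To make the multimodal fusion concrete, here is a minimal sketch of a network that encodes each modality (color image, depth map, and a texture-descriptor channel) separately and classifies the concatenated features. It is written in PyTorch under assumed, illustrative layer sizes; the name FusionNet and the whole architecture are hypothetical stand-ins, not the network used on the robot.

    # Hypothetical sketch: per-modality encoders feeding a shared classifier.
    import torch
    import torch.nn as nn

    class FusionNet(nn.Module):
        def __init__(self, num_classes=5):
            super().__init__()
            def encoder(in_ch):
                # Tiny convolutional encoder; sizes are illustrative only.
                return nn.Sequential(
                    nn.Conv2d(in_ch, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),  # one 32-d vector per modality
                )
            self.color = encoder(3)    # RGB image
            self.depth = encoder(1)    # depth map from the RGB-D camera
            self.texture = encoder(1)  # precomputed texture channel
            self.head = nn.Linear(32 * 3, num_classes)

        def forward(self, rgb, depth, texture):
            feats = [self.color(rgb).flatten(1),
                     self.depth(depth).flatten(1),
                     self.texture(texture).flatten(1)]
            return self.head(torch.cat(feats, dim=1))  # fused prediction

    net = FusionNet(num_classes=5)
    logits = net(torch.randn(1, 3, 64, 64),   # color
                 torch.randn(1, 1, 64, 64),   # depth
                 torch.randn(1, 1, 64, 64))   # texture
    print(logits.shape)  # torch.Size([1, 5])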
Abstract - The human perception of the external world appears as a natural, immediate, and effortless task. It is achieved through a number of "low-level" sensory-motor processes that provide a high-level representation adapted to complex reasoning and decision making. Compared to these representations, mobile robots usually provide only low-level obstacle maps that lack such high-level information. We present a mobile robot whose goal is to autonomously explore an unknown indoor environment and to build a semantic map containing high-level information similar to that extracted by humans, which users can rapidly and easily interpret to assess the situation. This robot was developed under the Panoramic and Active Camera for Object Mapping (PACOM) project, whose goal is to participate in a French exploration and mapping contest called CAROTTE. We detail in particular how we integrated visual object recognition, room detection, semantic mapping, and exploration. We demonstrate the performance of our system in an indoor environment.
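The semantic map described above (rooms, their connectivity, the objects they contain, and surface materials) can be pictured as a small annotated graph. The sketch below is a hypothetical data structure assuming Python dataclasses; the names Room, DetectedObject, and SemanticMap are illustrative and not taken from the PACOM system.

    # Hypothetical sketch: a semantic map as an annotated room graph.
    from dataclasses import dataclass, field

    @dataclass
    class DetectedObject:
        label: str        # e.g. "chair"
        position: tuple   # (x, y) in map coordinates
        confidence: float

    @dataclass
    class Room:
        name: str
        objects: list = field(default_factory=list)
        wall_material: str = "unknown"
        floor_material: str = "unknown"

    class SemanticMap:
        """Rooms as nodes, doorways as undirected edges."""
        def __init__(self):
            self.rooms = {}
            self.edges = set()  # frozensets of two room names

        def add_room(self, room):
            self.rooms[room.name] = room

        def connect(self, a, b):
            self.edges.add(frozenset((a, b)))

    # Example: two connected rooms, one holding a detected object.
    m = SemanticMap()
    office = Room("office", wall_material="plaster", floor_material="carpet")
    office.objects.append(DetectedObject("chair", (1.2, 0.5), 0.93))
    m.add_room(office)
    m.add_room(Room("hallway"))
    m.connect("office", "hallway")
    print(sorted(m.rooms))  # ['hallway', 'office']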
We address the problems of localization, mapping, and guidance for robots with limited computational resources by combining vision with the metrical information given by the robot's odometry. We propose in this article a novel, lightweight, and robust topometric simultaneous localization and mapping (SLAM) framework using appearance-based visual loop-closure detection enhanced with odometry. The main advantage of this combination is that the odometry makes loop-closure detection more accurate and reactive, while loop-closure detection enables the long-term use of odometry for guidance by correcting its drift. The guidance approach is based on qualitative localization using vision and odometry, and is robust to visual sensor occlusions or changes in the scene. The resulting framework is incremental, real-time, and based on inexpensive sensors available on many robots (a camera and odometry encoders). This approach is, moreover, particularly well suited for low-power robots, as it does not depend on the image-processing frequency and latency, and can therefore be applied using remote processing. The algorithm has been validated on a Pioneer P3DX mobile robot in indoor environments, and its robustness is demonstrated experimentally for a large range of odometry noise levels.
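The interplay between odometry and loop closure can be illustrated with a toy topometric map: wheel odometry dead-reckons the pose between places, and re-recognizing a stored place corrects the accumulated drift. The sketch below is a hypothetical minimal version that simply snaps the pose back to the matched node; the framework described above handles corrections far more carefully, and all names here (TopometricMap, string signatures standing in for appearance descriptors) are illustrative assumptions.

    # Hypothetical sketch: odometry dead reckoning with loop-closure correction.
    import math

    class TopometricMap:
        """Nodes are places with an appearance signature and a map-frame pose;
        a visual loop closure resets the drifting odometric estimate."""

        def __init__(self):
            self.nodes = {}  # signature -> (x, y, theta)

        def add_node(self, signature, pose):
            self.nodes[signature] = pose

        def integrate_odometry(self, pose, d_trans, d_rot):
            # Dead-reckon the next pose from wheel-odometry increments.
            x, y, th = pose
            x += d_trans * math.cos(th)
            y += d_trans * math.sin(th)
            th = (th + d_rot + math.pi) % (2 * math.pi) - math.pi  # wrap angle
            return (x, y, th)

        def loop_closure(self, pose, signature):
            # If the current view matches a stored place, replace the drifting
            # estimate with the pose remembered for that place (crudest policy).
            return self.nodes.get(signature, pose)

    # Example: drive a noisy square loop, then re-recognize the start.
    m = TopometricMap()
    m.add_node("start_view", (0.0, 0.0, 0.0))
    pose = (0.0, 0.0, 0.0)
    for _ in range(4):
        pose = m.integrate_odometry(pose, d_trans=1.02, d_rot=math.pi / 2 + 0.01)
    pose = m.loop_closure(pose, "start_view")  # drift corrected
    print(pose)  # (0.0, 0.0, 0.0)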