Abstract-We consider the problem of team-based robot mapping and localization using wireless signals broadcast from access points embedded in today's urban environments. We map and localize in an unknown environment, where the access points' locations are unspecified and for which training data is unavailable a priori. Our approach is based on a heterogeneous method combining robots with different sensor payloads. The algorithmic design assumes that a sensor-rich robot can produce a map in real time which is then quickly shared with sensor-deprived robot team members. More specifically, we cast WiFi localization as classification and regression problems that we subsequently solve using machine learning techniques. To produce a robust system, we take advantage of the spatial and temporal information inherent in robot motion by running Monte Carlo Localization on top of our regression algorithm, greatly improving its effectiveness. A comprehensive set of experiments is presented to demonstrate the accuracy, effectiveness, and practicality of the algorithm.

I. INTRODUCTION
As a result of the evident need for robots to localize and map unknown environments, a tremendous amount of research has focused on implementing these fundamental abilities. Localization problems have been extensively studied and a variety of solutions have been proposed, each assuming different sensors, robotic platforms, and scenarios. The increasingly popular trend of employing low-cost multi-robot teams [14], as opposed to a single expensive robot, introduces additional constraints and challenges that have received less attention. A tradeoff naturally arises, because reducing the number of sensors lowers the robots' price while making the localization problem more challenging. We anticipate that team-based robots will require WiFi technology to exchange information with each other.
We also foresee that robots will continue to provide rough estimates of local movements via odometry or similar inexpensive, low-accuracy sensors. These team-based robots have the advantage of being very affordable. It is clear, however, that such robots would not be practical in unknown environments due to their lack of perception abilities; as such, we embrace a heterogeneous setup pairing many of these simple robots with a single robot capable of mapping an environment by traditional means (e.g., SLAM using a laser range finder or other sophisticated proximity sensors). Within this scenario, our goal is to produce a map of an unknown environment in real time using the more capable robot, so that the less sophisticated robots can localize themselves.

Given the sensory constraints imposed on the robots, we exploit wireless signals from Access Points (APs) that have
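The combination described above, a regression model that predicts WiFi signal strengths wrapped in Monte Carlo Localization, can be sketched as follows. Everything here is illustrative: the log-distance path-loss signal model, the k-NN regressor, the AP layout, and all parameters are assumptions for the sketch, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical environment: three APs in a 10 m x 10 m area (assumed layout).
ap_xy = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 10.0]])

def rss(p):
    # Log-distance path-loss model, a stand-in for real measurements.
    d = np.linalg.norm(ap_xy - p, axis=1) + 0.1
    return -40.0 - 20.0 * np.log10(d)

# Survey collected by the sensor-rich robot: positions with RSS vectors.
train_xy = rng.uniform(0, 10, size=(300, 2))
train_rss = np.array([rss(p) for p in train_xy])

def knn_predict(p, k=5):
    # Regression step: predict the RSS vector at position p from the survey.
    idx = np.argsort(np.linalg.norm(train_xy - p, axis=1))[:k]
    return train_rss[idx].mean(axis=0)

def mcl_step(particles, odom, z, sigma=2.0):
    # Motion update with noise, importance weights from a Gaussian sensor
    # model around the regressor's prediction, then resampling.
    particles = particles + odom + rng.normal(0.0, 0.1, particles.shape)
    pred = np.array([knn_predict(p) for p in particles])
    err = np.sum((pred - z) ** 2, axis=1)
    w = np.exp(-(err - err.min()) / (2.0 * sigma ** 2))  # shift avoids underflow
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# A sensor-deprived robot localizes using only odometry and WiFi readings.
particles = rng.uniform(0, 10, size=(300, 2))
true_pose = np.array([2.0, 3.0])
for _ in range(20):
    true_pose = true_pose + np.array([0.2, 0.1])
    particles = mcl_step(particles, np.array([0.2, 0.1]), rss(true_pose))
estimate = particles.mean(axis=0)
```

The particle cloud concentrates where the regressor's predicted signal strengths agree with the measured ones; odometry alone would drift, while WiFi alone is ambiguous between locations with similar signatures.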
We present a system that enables multiple heterogeneous mobile robots to build and share an appearance-based map suitable for indoor navigation using exclusively monocular vision. Robots incrementally build an appearance-based model online from SIFT descriptors. The spatial model is enriched with additional information so that the map can be used for navigation also by robots other than those that built it. Once the map is available, navigation is performed using an approach based on epipolar geometry. The control mechanism builds upon the unicycle kinematic model and assumes robots are equipped with a servoed camera. The validity of the proposed approach is substantiated both in simulation and on a heterogeneous multi-robot system.

I. MOTIVATION AND CONTRIBUTION
This paper presents our first steps towards the implementation of a heterogeneous multi-robot system operating in indoor environments relying only on visual sensors. We show how a team of heterogeneous robots can build and take advantage of a spatial model of an unknown environment based exclusively on images taken from monocular cameras. The model is then used to localize and safely navigate to a target location specified as a desired robot view. Notably, and differently from most previously developed approaches, the map is built incrementally and does not require a preliminary data acquisition stage followed by a lengthy off-line map generation process. Our eventual goal is to equip these robots with mapping and navigation abilities comparable to those displayed by more sophisticated systems using laser range finders. While the spatial model will obviously be different, we strive to reach the same level of autonomy and safety in navigation. We stick to monocular images because monocular cameras are cheap and represent a ready-to-use tool to exchange high-level information between hand-held devices and robot systems.
Therefore, this appears to be a natural way to exchange information between users and robots, or to specify interesting locations for the robot to visit. Our work builds upon different contributions made in the past in the fields of visual servoing, mapping, and computer vision, and achieves a new level of competence, namely heterogeneous vision-based navigation. The system described in this paper builds from scratch an appearance-based map capturing salient visual features detected in the environment explored by the robot. Features inserted into the map are not tied to a specific robot morphology but are, so to speak, disembodied, inasmuch as they can be interpreted and reused also by robots with a morphology different from the one that produced the map. The map built can then be used to localize a robot and also for navigation towards a desired target image. In Section II we briefly describe related literature in the field of spatial modeling using vision. Next, in Section III we present a method that allows ...

G. Erinc and S. Carpin are with the School of Engineering, University of California, Merced, USA.
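Navigation based on epipolar geometry, as mentioned above, rests on the epipolar constraint: for two views related by rotation R and translation t, corresponding normalized image points satisfy x2ᵀ E x1 = 0 with the essential matrix E = [t]× R. The sketch below, with an invented relative pose and synthetic 3D points standing in for matched SIFT features, only verifies this constraint; the paper's controller goes further and derives unicycle commands from it.

```python
import numpy as np

def skew(t):
    # Cross-product matrix: skew(t) @ v == np.cross(t, v)
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Assumed relative pose between current and target view: a yaw rotation
# (about the camera's y axis) plus a translation, as a unicycle might produce.
theta = 0.3
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.5, 0.0, 1.0])
E = skew(t) @ R  # essential matrix

# Synthetic 3D points in front of both cameras; view 2 sees X2 = R @ X + t.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, (10, 3)) + np.array([0.0, 0.0, 5.0])
x1 = X / X[:, 2:3]          # normalized image coordinates, view 1
X2 = (R @ X.T).T + t
x2 = X2 / X2[:, 2:3]        # normalized image coordinates, view 2

# Epipolar residuals x2^T E x1 vanish for true correspondences.
residuals = np.einsum('ni,ij,nj->n', x2, E, x1)
```

In practice E is estimated from noisy feature matches rather than from a known pose, and the residuals are only approximately zero; the constraint then serves as the error signal driving the servoed camera and the robot's heading.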
Appearance-based maps are emerging as an important class of spatial representations for mobile robots. In this paper we tackle the problem of merging two or more appearance-based maps independently built by robots operating in the same environment. Noting the lack of well-accepted metrics to measure the performance of map merging algorithms, we propose algebraic connectivity as a metric to assess the advantage gained by merging multiple maps. Next, based on this criterion, we propose an anytime algorithm aiming to quickly identify the most advantageous parts to merge. The proposed system has been fully implemented and tested in indoor scenarios; the results show that our algorithm achieves a convenient tradeoff between accuracy and speed.
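Algebraic connectivity is the second-smallest eigenvalue of a graph's Laplacian (the Fiedler value). The toy sketch below, whose graphs and merge edge are invented rather than taken from the paper's maps, shows how a single edge that closes a loop raises this value, which is the intuition behind using it to rank candidate merges.

```python
import numpy as np

def algebraic_connectivity(adj):
    # Fiedler value: second-smallest eigenvalue of the Laplacian L = D - A.
    L = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(L))[1]

# A 4-node path graph: a chain of map vertices 0-1-2-3.
path = np.array([[0, 1, 0, 0],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# Merging identifies vertices 0 and 3 across maps, adding an edge
# that turns the path into a cycle.
cycle = path.copy()
cycle[0, 3] = cycle[3, 0] = 1.0

lam_path = algebraic_connectivity(path)    # 2 - sqrt(2), about 0.586
lam_cycle = algebraic_connectivity(cycle)  # 2.0
```

A disconnected graph has algebraic connectivity zero, so the metric also cleanly captures the most basic benefit of merging: joining maps that were previously separate components.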
Abstract-We propose a map merging algorithm that is capable of merging heterogeneous maps independently built by different robots. Heterogeneous map merging is a crucially important problem for scenarios where multiple heterogeneous robots collaborate to provide situational awareness in urban search and rescue, patrolling, and exploration tasks, to name a few. To remedy the lack of a uniform representation across heterogeneous map models, we rely on the ubiquitous presence of WiFi signals in today's environments. Our solution consists of three steps. First, the overlap between the heterogeneous maps being merged is determined. Second, metric correspondences between overlapping parts are established. Third, the merge is refined by exploiting the structural properties inherent to graph-based maps. Our proposed system is validated using various occupancy grid and appearance-based maps built in real-world conditions, and the results confirm its strengths. To the best of our knowledge, this is the first solution to the heterogeneous map merging problem.
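The first step above, determining overlap, can be illustrated with WiFi signatures: if each map vertex stores the mean RSS per access point (keyed by MAC address), candidate correspondences are vertex pairs whose signatures agree over the APs both maps observe. The data, threshold, and helper functions below are made up for illustration and are not the paper's algorithm.

```python
import numpy as np

def signature_distance(sig_a, sig_b):
    # Compare two WiFi signatures (dicts MAC -> mean RSS in dBm) over the
    # APs they share; report infinite distance if they share none.
    shared = sig_a.keys() & sig_b.keys()
    if not shared:
        return float('inf')
    diffs = [sig_a[m] - sig_b[m] for m in shared]
    return float(np.sqrt(np.mean(np.square(diffs))))

def candidate_overlap(map_a, map_b, threshold=5.0):
    # Vertex pairs (i, j) whose signatures match within `threshold` dBm.
    return [(i, j)
            for i, sa in enumerate(map_a)
            for j, sb in enumerate(map_b)
            if signature_distance(sa, sb) < threshold]

# Toy maps: lists of per-vertex signatures. Vertices a0 and b0 were
# recorded near the same physical spot; the other vertices were not.
map_a = [{'ap1': -45.0, 'ap2': -70.0}, {'ap1': -80.0, 'ap2': -50.0}]
map_b = [{'ap1': -47.0, 'ap2': -68.0, 'ap3': -60.0}, {'ap2': -85.0}]

pairs = candidate_overlap(map_a, map_b)
```

Because the signatures are independent of how each map represents geometry (occupancy grid versus appearance graph), they provide the common currency the two heterogeneous maps otherwise lack; the surviving pairs seed the metric correspondence step.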
In this paper we present a WiFi-based solution to the localization and mapping problem for teams of heterogeneous robots operating in unknown environments. By exploiting wireless signal strengths broadcast from access points, a robot with a large sensor payload creates a WiFi signal map that can then be shared and used for localization by sensor-deprived robots. In our approach, WiFi localization is cast as a classification problem. An online clustering algorithm processes incoming WiFi signals, which are then incorporated into an online random forest. The algorithm's robustness is increased by a Monte Carlo Localization algorithm whose sensor model exploits the results of the online random forest classification. The proposed algorithm is shown to run in real time, allowing the robots to operate in completely unknown environments where a priori information such as a blueprint or the access points' locations is unavailable. A comprehensive set of experiments not only compares our approach with other algorithms, but also validates the results across different scenarios covering both indoor and outdoor environments.
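A minimal stand-in for the pipeline just described, and emphatically not the paper's online random forest, is a nearest-centroid scheme: incoming RSS vectors either update an existing cluster or spawn a new one anchored at the mapper's pose, and a later reading is classified to the nearest cluster, whose pose can then weight MCL particles. The radius, learning rate, and data below are assumptions.

```python
import numpy as np

class OnlineSignalClusters:
    def __init__(self, radius=8.0):
        self.radius = radius      # dBm radius for joining an existing cluster
        self.centroids = []       # running mean RSS vector per cluster
        self.poses = []           # mapper pose recorded at cluster creation

    def update(self, rss, pose):
        # Online clustering: join the first close-enough cluster,
        # otherwise start a new one anchored at the current pose.
        rss = np.asarray(rss, float)
        for i, c in enumerate(self.centroids):
            if np.linalg.norm(rss - c) < self.radius:
                self.centroids[i] = 0.9 * c + 0.1 * rss  # running mean
                return i
        self.centroids.append(rss)
        self.poses.append(np.asarray(pose, float))
        return len(self.centroids) - 1

    def classify(self, rss):
        # Classification step: index of the nearest cluster centroid.
        d = [np.linalg.norm(np.asarray(rss, float) - c) for c in self.centroids]
        return int(np.argmin(d))

# Mapping phase: the sensor-rich robot records readings from two places.
clusters = OnlineSignalClusters()
clusters.update([-40.0, -75.0], pose=(1.0, 1.0))   # room A
clusters.update([-42.0, -73.0], pose=(1.2, 0.9))   # room A again
clusters.update([-80.0, -45.0], pose=(9.0, 8.0))   # room B

# Localization phase: a sensor-deprived robot hears a room-B-like signal.
label = clusters.classify([-78.0, -47.0])
region = clusters.poses[label]
```

In the full system the classifier's output does not replace the pose estimate; it shapes the MCL sensor model, so particles near the classified region gain weight while motion information resolves the remaining ambiguity.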
We present a set of task-based performance evaluation criteria designed to measure the quality of appearance-based maps. Instead of aiming to measure a map's overall goodness, the metrics defined in this paper focus on individual tasks, namely localization, planning, and navigation, and on the quality of the map with respect to their successful execution. The performance of a map in terms of localization is measured by the amount of information captured from the environment and the accuracy of that information. The planning metric instead favors maps with high connectivity and measures the validity of these connections. The navigation criterion, on the other hand, computes the robustness and stability associated with the paths a robot will extract from the map. These metrics are tested on appearance-based maps created in our lab, and their distinctiveness is demonstrated.