Classification of spatial regions based on semantic information in an indoor environment enables robot tasks such as navigation or mobile manipulation to be spatially aware. The availability of contextual information can significantly simplify operation of a mobile platform. We present methods for automated recognition and classification of spaces into separate semantic regions and the use of such information for generation of a topological map of an environment. The association of semantic labels with spatial regions is based on Human Augmented Mapping. The methods presented in this paper are evaluated both in simulation and on real data acquired from an office environment.
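The topological map described above can be pictured as a labeled graph: nodes are semantic regions and edges encode traversability. The following is a minimal, hypothetical sketch of that structure (class and method names are illustrative assumptions, not the paper's implementation), with a breadth-first search that routes the robot to the nearest region carrying a requested semantic label.

```python
from collections import deque

# Hypothetical sketch: a topological map whose nodes are spatial regions
# carrying semantic labels (in the spirit of Human Augmented Mapping) and
# whose edges encode traversability between adjacent regions.
class TopologicalMap:
    def __init__(self):
        self.labels = {}      # region id -> semantic label
        self.adjacency = {}   # region id -> set of neighboring region ids

    def add_region(self, region_id, label):
        self.labels[region_id] = label
        self.adjacency.setdefault(region_id, set())

    def connect(self, a, b):
        # regions are physically adjacent; traversal is bidirectional
        self.adjacency[a].add(b)
        self.adjacency[b].add(a)

    def route_to_label(self, start, target_label):
        """Breadth-first search for the nearest region with the given label."""
        queue, visited = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if self.labels[path[-1]] == target_label:
                return path
            for nxt in self.adjacency[path[-1]] - visited:
                visited.add(nxt)
                queue.append(path + [nxt])
        return None  # no region with that label is reachable
```

For example, with an office connected to a kitchen through a corridor, `route_to_label("office", "kitchen")` returns the region sequence `["office", "corridor", "kitchen"]`.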
Situational awareness in rescue operations can be provided by teams of autonomous mobile robots. The current generation of mobile robots for such applications must be teleoperated by human operators; however, teleoperation becomes increasingly difficult as the number of robots grows. As the team grows, the robots may also interfere with one another and eventually degrade mapping performance. As presented here, through careful consideration of robot team coordination and exploration strategy, large numbers of mobile robots can be allocated to accomplish the mapping task more quickly and accurately. We present both the coordination and exploration strategies, and report results from experiments in simulation as well as with up to nine mobile platforms.
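A common way to realize the coordination idea sketched in this abstract is greedy frontier assignment with utility discounting: each robot is assigned the frontier with the best utility-minus-cost score, and frontiers near an assigned one lose utility so that other robots spread out rather than interfere. The sketch below is an illustrative assumption in that spirit (function names, the Euclidean cost, and the linear discount are not taken from the paper).

```python
# Hypothetical sketch of coordinated frontier assignment. Robots are
# assigned greedily; after each assignment, nearby frontiers are
# discounted so the remaining robots are pushed toward different
# parts of the map instead of interfering with one another.
def assign_frontiers(robots, frontiers, discount_radius=2.0):
    """robots: {name: (x, y)}; frontiers: list of (x, y) targets."""
    utility = {f: 1.0 for f in frontiers}
    assignment = {}

    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    for name, pos in robots.items():
        # score = expected information gain (utility) minus travel cost
        best = max(frontiers, key=lambda f: utility[f] - dist(f, pos))
        assignment[name] = best
        # discount frontiers close to the chosen one (linear falloff)
        for f in frontiers:
            d = dist(f, best)
            if d < discount_radius:
                utility[f] -= max(0.0, 1.0 - d / discount_radius)
    return assignment
```

With two robots at opposite ends of a corridor and one frontier near each, the discounting ensures each robot is sent to its own frontier rather than both converging on the same one.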
This paper revisits Kimera-Multi, a distributed multi-robot Simultaneous Localization and Mapping (SLAM) system, towards the goal of deployment in the real world. In particular, this paper has three main contributions. First, we describe improvements to Kimera-Multi to make it resilient to large-scale real-world deployments, with particular emphasis on handling intermittent and unreliable communication. Second, we collect and release challenging multi-robot benchmarking datasets obtained during live experiments conducted on the MIT campus, with accurate reference trajectories and maps for evaluation. The datasets include up to 8 robots traversing long distances (up to 8 km) and feature many challenging elements such as severe visual ambiguities (e.g., in underground tunnels and hallways), mixed indoor and outdoor trajectories with different lighting conditions, and dynamic entities (e.g., pedestrians and cars). Lastly, we evaluate the resilience of Kimera-Multi under different communication scenarios, and provide a quantitative comparison with a centralized baseline system. Based on the results from both live experiments and subsequent analysis, we discuss the strengths and weaknesses of Kimera-Multi, and suggest future directions for both algorithm and system design. We release the source code of Kimera-Multi and all datasets to facilitate further research towards the reliable real-world deployment of multi-robot SLAM systems.
Complex and structured landmarks like objects have many advantages over low-level image features for semantic mapping. Low-level features such as image corners suffer from occlusion boundaries, ambiguous data association, imaging artifacts, and viewpoint dependence. Artificial landmarks are an unsatisfactory alternative because they must be placed in the environment solely for the robot's benefit. Human environments contain many objects which can serve as suitable landmarks for robot navigation, such as signs, appliances, and furniture. Maps based on high-level features identified by a learned classifier could better inform tasks such as semantic mapping and mobile manipulation. In this paper we present a technique for recognizing door signs using a learned classifier as one example of this approach, and demonstrate their use in a graphical SLAM framework with data association provided by reasoning about the semantic meaning of the sign.
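The data-association idea in this abstract can be made concrete with a small sketch: because a door-sign detection carries semantic content (e.g., a recognized room number), observations can be matched to existing landmarks by that content rather than by fragile geometric nearest-neighbor matching alone. The class and method names below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of semantic data association: each door-sign
# observation carries its recognized text (e.g., a room number), and a
# repeated observation of the same text is associated with the existing
# landmark, yielding a loop-closure candidate for the SLAM graph.
class SemanticAssociator:
    def __init__(self):
        self.landmarks = {}   # landmark id -> recognized sign text

    def associate(self, sign_text):
        """Return the id of the landmark with matching text, or register a new one."""
        for lid, text in self.landmarks.items():
            if text == sign_text:
                return lid            # same sign seen again
        new_id = len(self.landmarks)  # first sighting: create a new landmark
        self.landmarks[new_id] = sign_text
        return new_id
```

Seeing sign "214", then "215", then "214" again yields landmark ids 0, 1, and 0: the third observation is correctly associated with the first landmark instead of spawning a duplicate.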
In recent years, there has been a rapid increase in the number of service robots deployed for aiding people in their daily activities. Unfortunately, most of these robots require human input for training in order to do tasks in indoor environments. Successful domestic navigation often requires access to semantic information about the environment, which can be learned without human guidance. In this paper, we propose DEDUCE (Diverse scEne Detection methods in Unseen Challenging Environments), a set of algorithms which incorporate deep fusion models derived from scene recognition systems and object detectors. The five methods described here have been evaluated on several popular recent image datasets, as well as real-world videos acquired through multiple mobile platforms. The final results show an improvement over the existing state-of-the-art visual place recognition systems.
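One simple way to combine the two information sources this abstract mentions is late fusion: mix scene-classifier probabilities with object-detector evidence (a detected "stove" supports "kitchen"). The sketch below uses a hand-written object-to-scene compatibility table and a log-linear mixing weight; both are illustrative assumptions and not DEDUCE's actual learned fusion model.

```python
import math

# Hypothetical late-fusion sketch: combine scene-classifier probabilities
# with object-detector evidence. The compatibility table and mixing
# weight alpha are illustrative assumptions.
OBJECT_SCENE_PRIOR = {
    "stove":   {"kitchen": 0.9,  "office": 0.05, "bedroom": 0.05},
    "monitor": {"kitchen": 0.05, "office": 0.9,  "bedroom": 0.05},
}

def fuse(scene_probs, detected_objects, alpha=0.5):
    """Log-linear fusion of scene and object evidence, renormalized."""
    scores = {}
    for scene, p in scene_probs.items():
        s = alpha * math.log(p + 1e-9)
        for obj in detected_objects:
            # each detected object votes for the scenes it is compatible with
            s += (1 - alpha) * math.log(OBJECT_SCENE_PRIOR[obj][scene] + 1e-9)
        scores[scene] = s
    z = sum(math.exp(v) for v in scores.values())
    return {k: math.exp(v) / z for k, v in scores.items()}
```

For instance, if the scene classifier slightly prefers "office" but the detector reports a stove, the fused distribution shifts the prediction to "kitchen".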
The goal of simultaneous localization and mapping (SLAM) is to compute the posterior distribution over landmark poses. Typically, this is made possible through the static world assumption: the landmarks remain in the same location throughout the mapping procedure. Some prior work has addressed this assumption by splitting maps into static and dynamic sets, or by recognizing moving landmarks and tracking them. In contrast to previous work, we apply an Expectation Maximization technique to a graph-based SLAM approach and allow landmarks to be dynamic. The batch nature of this operation enables us to detect moveable landmarks and factor them out of the map. We demonstrate the performance of this algorithm with a series of experiments with moveable landmarks in a structured environment.
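The EM idea above can be sketched in miniature: in the E-step, estimate the probability that each landmark is moveable from how much its observations disagree with a single fixed position; in the M-step, re-fit positions using only the landmarks currently believed static, and drop the moveable ones from the map. The 1-D setting, the variance-based responsibility, and the 0.5 threshold below are illustrative simplifications, not the paper's graph-SLAM formulation.

```python
# Hypothetical 1-D sketch of EM-style moveable-landmark detection.
# A landmark whose observations scatter far more than the sensor noise
# is flagged moveable and factored out of the returned static map.
def em_movable_landmarks(observations, iters=10, noise=0.1):
    """observations: {landmark id: list of observed 1-D positions}."""
    estimates = {lid: sum(obs) / len(obs) for lid, obs in observations.items()}
    p_movable = {lid: 0.5 for lid in observations}
    for _ in range(iters):
        # E-step: large residual variance relative to sensor noise
        # means the landmark probably moved between observations
        for lid, obs in observations.items():
            var = sum((o - estimates[lid]) ** 2 for o in obs) / len(obs)
            p_movable[lid] = var / (var + noise ** 2)
        # M-step: re-estimate positions of landmarks still believed static
        for lid, obs in observations.items():
            if p_movable[lid] < 0.5:
                estimates[lid] = sum(obs) / len(obs)
    static_map = {lid: e for lid, e in estimates.items() if p_movable[lid] < 0.5}
    return static_map, p_movable
```

A wall corner observed three times near 1.0 m stays in the static map, while a chair observed at 0 m, 5 m, and 10 m is flagged moveable and factored out.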