Thanks to the efforts of the robotics and autonomous systems community, robots are becoming ever more capable. There is also an increasing demand from end-users for autonomous service robots that can operate in real environments for extended periods. In the STRANDS project we are tackling this demand head-on by integrating state-of-the-art artificial intelligence and robotics research into mobile service robots, and by deploying these systems for long-term installations in security and care environments. Over four deployments, our robots have been operational for a combined duration of 104 days, autonomously performing end-user-defined tasks and covering 116 km in the process. In this article we describe the approach we have used to enable long-term autonomous operation in everyday environments, and how our robots are able to use their long run times to improve their own performance.
Abstract-We present a method for introducing a representation of dynamics into environment models that were originally tailored to represent static scenes. Rather than using a fixed probability value, the method models the uncertainty of the elementary environment states by probabilistic functions of time. These are composed of combinations of harmonic functions, which are obtained by means of frequency analysis. The use of frequency analysis makes it possible to integrate long-term observations into memory-efficient spatio-temporal models that reflect the mid- to long-term environment dynamics. These frequency-enhanced spatio-temporal models allow the robot to predict future environment states, which improves the efficiency of mobile robot operation in changing environments. In a series of experiments performed over periods of days to years, we demonstrate that the proposed approach improves localization, path planning and exploration.
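The frequency-analysis idea above can be sketched in a few lines. The snippet below is a minimal illustration under our own assumptions, not the authors' implementation: it models a binary environment state (e.g. a door being open) as its mean occupancy plus the single strongest harmonic component, chosen from a set of candidate periods. All function names and the candidate-period set are ours.

```python
import math

def build_temporal_model(times, states, periods):
    """Fit a frequency-based temporal model: mean occupancy plus the
    strongest harmonic chosen from candidate periods.
    `times` are observation timestamps, `states` are 0/1 observations."""
    n = len(states)
    mean = sum(states) / n
    best = (0.0, 0.0, 0.0)  # (amplitude, phase, angular frequency)
    for period in periods:
        omega = 2.0 * math.pi / period
        # Fourier coefficient of the mean-removed signal at this frequency
        re = sum((s - mean) * math.cos(omega * t) for t, s in zip(times, states)) / n
        im = sum((s - mean) * math.sin(omega * t) for t, s in zip(times, states)) / n
        amp = math.hypot(re, im)
        if amp > best[0]:
            best = (amp, math.atan2(im, re), omega)
    return mean, best

def predict(model, t):
    """Predict the probability of the state at time t, clamped to [0, 1]."""
    mean, (amp, phase, omega) = model
    p = mean + 2.0 * amp * math.cos(omega * t - phase)
    return min(1.0, max(0.0, p))
```

Trained on a week of hourly observations of a door that is open during working hours, such a model predicts a high open-probability around midday and a low one at night, which is exactly the predictive ability the abstract exploits for localization and planning.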
Abstract-In this work, a study of several laser-based 2D Simultaneous Localization and Mapping (SLAM) techniques available in the Robot Operating System (ROS) is conducted. All the approaches have been evaluated and compared in 2D simulations and real-world experiments. In order to draw conclusions on the performance of the tested techniques, the experimental results were collected under the same conditions and a generalized performance metric based on the k-nearest neighbors concept was applied. Moreover, the CPU load of each technique is examined. This work provides insight into the weaknesses and strengths of each solution. Such analysis is fundamental to deciding which solution to adopt according to the properties of the intended final application.
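A k-nearest-neighbors map-quality metric of the kind mentioned above can be sketched as follows. This is a hypothetical minimal version, not the metric from the study: for every point of the map produced by a SLAM technique it averages the distances to its k nearest neighbors in a reference (ground-truth) map, giving one score per technique (lower is better). The function name and the brute-force search are ours.

```python
import math

def knn_map_error(estimated_pts, reference_pts, k=1):
    """For each point of the estimated map, average the distances to its
    k nearest neighbors in the reference map; return the mean over all
    points as a single map-quality score (lower is better)."""
    total = 0.0
    for ex, ey in estimated_pts:
        # brute-force nearest-neighbor search; a k-d tree would scale better
        dists = sorted(math.hypot(ex - rx, ey - ry) for rx, ry in reference_pts)
        total += sum(dists[:k]) / k
    return total / len(estimated_pts)
```

A perfect map scores 0, and a map uniformly offset from the reference scores roughly the offset distance, which makes scores from different SLAM techniques directly comparable when collected under the same conditions.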
Abstract-We propose a new idea for life-long mobile robot spatio-temporal exploration of dynamic environments. Our method assumes that the world is subject to constant change, which adds an extra, temporal dimension to the explored space and makes the exploration task a never-ending data-gathering process. To create and maintain a spatio-temporal model of a dynamic environment, the robot has to determine not only where, but also when to perform observations. We address the problem by applying information-theoretic exploration to world representations that model the uncertainty of the environment states as probabilistic functions of time. We compare the performance of different exploration strategies and temporal models on real-world data gathered over the course of several months, and show that combining dynamic environment representations with information-gain exploration principles makes it possible to create and maintain up-to-date models of constantly changing environments.
Abstract-This paper presents an exploration method that allows mobile robots to build and maintain spatio-temporal models of changing environments. The assumption of a perpetually changing world adds a temporal dimension to the exploration problem, making spatio-temporal exploration a never-ending, life-long learning process. We address the problem by applying information-theoretic exploration methods to spatio-temporal models that represent the uncertainty of environment states as probabilistic functions of time. This makes it possible to predict the potential information gain obtained by observing a particular area at a given time and, consequently, to decide which locations to visit and the best times to go there. To validate the approach, a mobile robot was deployed continuously over 5 consecutive business days in a busy office environment. The results indicate that the robot's ability to spot environmental changes improved as it refined its knowledge of the world dynamics.
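The core of information-gain exploration, as described in the two abstracts above, can be sketched with the binary entropy of the predicted state probabilities: the robot should observe the location and time whose predicted state is most uncertain. The snippet below is our own minimal illustration of that principle, not the papers' planner; the dictionary keys and names are hypothetical.

```python
import math

def entropy(p):
    """Binary entropy (bits) of a predicted state probability."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def best_observation(predictions):
    """predictions maps (location, time) to the predicted probability of
    the state there. Pick the pair whose predicted state is most
    uncertain, i.e. promises the highest expected information gain."""
    return max(predictions, key=lambda lt: entropy(predictions[lt]))
```

A state predicted with probability 0.5 carries one full bit of potential information, while a near-certain state (0.95 or 0.1) carries much less, so the planner is naturally drawn to the places and times where the temporal model is least sure.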
We present a study of spatio-temporal environment representations and exploration strategies for long-term deployment of mobile robots in real-world, dynamic environments. We propose a new concept for life-long mobile robot spatio-temporal exploration that aims at building, updating and maintaining the environment model during long-term deployment. The addition of the temporal dimension to the explored space makes the exploration task a never-ending data-gathering process, which we address by applying information-theoretic exploration techniques to world representations that model the uncertainty of environment states as probabilistic functions of time. We evaluate the performance of different exploration strategies and temporal models on real-world data gathered over the course of several months. The combination of dynamic environment representations with information-gain exploration principles makes it possible to create and maintain up-to-date models of continuously changing environments, enabling efficient and self-improving long-term operation of mobile robots.
Abstract-We present a real-time vision-based road-following method for mobile robots in outdoor environments. The approach combines an image-processing method that retrieves illumination-invariant images with an efficient path-following algorithm. The method allows a mobile robot to autonomously navigate along pathways of different types in adverse lighting conditions using monocular vision. To validate the proposed method, we have evaluated its ability to correctly determine the boundaries of pathways in a challenging outdoor dataset. Moreover, the method's performance was tested on a mobile robotic platform that autonomously navigated long paths in urban parks. The experiments demonstrated that the mobile robot was able to identify outdoor pathways of different types and navigate along them despite the presence of shadows that significantly influenced the paths' appearance.
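One widely used formulation of illumination-invariant imaging combines log-transformed color channels so that a uniform scaling of pixel brightness cancels out. The abstract does not specify which transform the authors use; the per-pixel sketch below is one common formulation under our assumptions, with alpha a camera-dependent constant (0.48 is a typical value for some cameras) and the function name ours.

```python
import math

def illumination_invariant(r, g, b, alpha=0.48):
    """Map one RGB pixel (channel values > 0) to a single
    illumination-invariant channel:
        ii = 0.5 + log(G) - alpha*log(B) - (1 - alpha)*log(R)
    Because the log coefficients sum to zero (1 - alpha - (1-alpha) = 0),
    any uniform brightness scaling of the pixel cancels out, which
    suppresses shadows on the path surface."""
    return 0.5 + math.log(g) - alpha * math.log(b) - (1.0 - alpha) * math.log(r)
```

Doubling all three channels of a pixel, as a shadow boundary roughly does, leaves the output unchanged, which is why path boundaries remain detectable in adverse lighting.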
Mapping and navigating with mobile robots in scenarios with reduced visibility, e.g. due to smoke, dust, or fog, remains a major challenge. In spite of the tremendous advances in Simultaneous Localization and Mapping (SLAM) techniques over the past decade, most current algorithms fail in these environments because they usually rely on optical sensors providing dense range data, e.g. laser range finders, stereo vision, LIDARs, RGB-D cameras, etc., whose measurement process is highly disturbed by particles of smoke, dust, or steam. This article addresses the problem of performing SLAM under reduced-visibility conditions by proposing a sensor fusion layer which takes advantage of the complementary characteristics of a laser range finder (LRF) and an array of sonars. This sensor fusion layer is ultimately used with a state-of-the-art SLAM technique to remain resilient in scenarios where visibility cannot be assumed at all times. Special attention is given to mapping using commercial off-the-shelf (COTS) sensors, namely arrays of sonars which, being usually available in robotic platforms, raise technical issues that were investigated in the course of this work. Two sensor fusion methods, a heuristic method and a fuzzy-logic-based method, are presented and discussed, corresponding to different stages of the research work conducted. The experimental validation of both methods with two different mobile robot platforms in smoky indoor scenarios showed that they provide a robust solution, using only COTS sensors, for adequately coping with reduced visibility in the SLAM process, thus significantly decreasing its impact on the mapping and localization results obtained.
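A heuristic fusion rule of the kind the abstract describes can be sketched as follows. This is our own illustrative version, not the article's method: smoke tends to produce spuriously short laser returns while sound passes through it, so when the laser reading is much shorter than the sonar reading in the same direction, the laser return is treated as a smoke artifact and the sonar range is used instead. The function name and the margin value are assumptions.

```python
def fuse_ranges(laser, sonar, smoke_margin=0.5):
    """Heuristically fuse paired laser and sonar ranges (meters) taken in
    the same directions. A laser reading much shorter than the sonar one
    is assumed to come from airborne particles, so the sonar range wins;
    otherwise the more accurate laser reading is kept."""
    fused = []
    for l, s in zip(laser, sonar):
        if s - l > smoke_margin:
            fused.append(s)   # laser probably hit smoke/dust particles
        else:
            fused.append(l)   # clear air: trust the sharper laser beam
    return fused
```

The fused scan can then be fed to an ordinary occupancy-grid SLAM front end in place of the raw laser scan, which is the role the abstract assigns to its sensor fusion layer.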