Abstract: Multi-Robot Systems (MRS) are an important research area within Robotics and Artificial Intelligence, and a growing number of systems have recently been presented in the literature. Since the application domains and tasks faced by MRS are of increasing complexity, the ability of robots to cooperate can be regarded as a fundamental feature. In this paper, we present a survey of recent work in the area, specifically examining the forms of cooperation and coordination realized in MRS. In particular, we propose a new taxonomy for classifying approaches to coordination in MRS, and we describe several systems that we consider representative within this taxonomy. Finally, we discuss the outcomes of our analysis and highlight future trends in MRS research.
Please cite this article in press as: L. Iocchi et al., RoboCup@Home: Analysis and results of evolving competitions for domestic and service robots, Artificial Intelligence (2015), http://dx.

Abstract: Scientific competitions are becoming more common in many research areas of artificial intelligence and robotics, since they provide a shared testbed for comparing different solutions and enable the exchange of research results. Moreover, they are interesting to general audiences and industry. Currently, many major research areas in artificial intelligence and robotics organize multi-year competitions that are typically associated with scientific conferences.

One important aspect of such competitions is that they are organized over many years. This introduces a temporal evolution that is interesting to analyze. However, the problem of evaluating a competition over many years remains unaddressed. We believe that this issue is critical to properly fuel changes over the years and to measure the results of those decisions. Therefore, this article focuses on the analysis and results of evolving competitions.

In this article, we present the RoboCup@Home competition, the largest worldwide competition for domestic service robots, and evaluate its progress over the past seven years. We show how the definition of a proper scoring system allows desired functionalities to be related to tasks, and how the resulting analysis fuels subsequent changes to achieve general and robust solutions implemented by the teams.
Our results show not only the steadily increasing complexity of the tasks that RoboCup@Home robots can solve, but also improved performance across all of the functionalities addressed in the competition. We believe that the methodology used in RoboCup@Home for evaluating competition advances and for stimulating changes can be applied and extended to other robotic competitions, as well as to multi-year research projects involving Artificial Intelligence and Robotics.
"Exploration and search" is a typical task for autonomous robots in rescue missions, addressing the problem of exploring the environment while simultaneously searching for interesting features within it. In this paper, we model this problem as a multi-objective exploration and search problem and present a prototype system featuring a strategic level, which can be used to adapt the exploration and search task to specific rescue missions. Specifically, we use a high-level representation of robot plans based on a Petri Net formalism, which allows decisions, loops, interrupts due to unexpected events or action failures, concurrent actions, and action synchronization to be represented in a single coherent framework. While autonomous exploration has been investigated in the past, we focus specifically on the problem of searching for interesting features in the environment during the map-building process. We discuss the performance evaluation of exploration and search strategies for rescue robots using an effective performance metric, and present an evaluation of our system through a set of experiments.
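To make the Petri Net idea concrete, the following is a minimal, hypothetical sketch of a place/transition net modeling a robot plan with an interrupt, in the spirit of the formalism the abstract describes. The class design, place names, and transition names are illustrative assumptions, not taken from the paper.

```python
class PetriNet:
    """Minimal place/transition Petri net with integer token markings."""

    def __init__(self):
        self.marking = {}          # place name -> token count
        self.transitions = {}      # transition name -> (input places, output places)

    def add_place(self, name, tokens=0):
        self.marking[name] = tokens

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        # A transition is enabled when every input place holds a token.
        inputs, _ = self.transitions[name]
        return all(self.marking[p] > 0 for p in inputs)

    def fire(self, name):
        # Firing consumes one token per input place, produces one per output place.
        if not self.enabled(name):
            raise RuntimeError(f"transition {name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] += 1

# Model an "explore" action that can be interrupted by an unexpected event
# (a feature detected during map building):
net = PetriNet()
net.add_place("idle", tokens=1)
net.add_place("exploring")
net.add_place("feature_found")
net.add_transition("start_explore", inputs=["idle"], outputs=["exploring"])
net.add_transition("interrupt", inputs=["exploring"], outputs=["feature_found"])

net.fire("start_explore")   # the plan moves the token: idle -> exploring
net.fire("interrupt")       # unexpected event: exploring -> feature_found
```

Concurrency and synchronization would be modeled the same way, by giving a transition multiple input or output places, which is what makes the formalism suitable for representing concurrent actions in one framework.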
Automated planning and reinforcement learning embody complementary views of decision making: the former relies on prior knowledge and computation, while the latter relies on interaction with the world and on experience. Planning allows robots to carry out different tasks in the same domain without needing to acquire knowledge about each of them, but depends strongly on the accuracy of the model. Reinforcement learning, on the other hand, requires no prior knowledge and allows robots to adapt robustly to the environment, but often necessitates an infeasible amount of experience. We present Domain Approximation for Reinforcement LearnING (DARLING), a method that takes advantage of planning to constrain the agent's behavior to reasonable choices, and of reinforcement learning to adapt to the environment and increase the reliability of the decision-making process. We demonstrate the effectiveness of the proposed method on a service robot carrying out a variety of tasks in an office building. We find that when the robot makes decisions by planning alone on a given model, it often fails; when it makes decisions by reinforcement learning alone, it often cannot complete its tasks in a reasonable amount of time. When employing DARLING, however, even when seeded with the same model used for planning alone, the robot quickly learns a behavior to carry out all the tasks, improves over time, and adapts to the environment as it changes.
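The core idea of letting a planner prune the action set before learning can be sketched in a few lines. Below is a toy illustration of that combination, not the DARLING algorithm itself: the corridor domain, the rule-based "planner", and the reward values are all invented for the example, and tabular Q-learning stands in for whatever learner the paper actually uses.

```python
import random

random.seed(0)

STATES = range(5)      # positions 0..4 in a toy corridor; the goal is position 4
ACTIONS = [-1, +1]     # move left / move right

def plan_allowed(state):
    # Toy "planner": rule out actions that would leave the corridor.
    # In a DARLING-like system this set would come from a symbolic planner.
    return [a for a in ACTIONS if 0 <= state + a <= 4]

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration (assumed)

for _ in range(200):                     # learning episodes
    s = 0
    while s != 4:
        allowed = plan_allowed(s)        # the planner constrains the choices...
        if random.random() < eps:
            a = random.choice(allowed)   # ...and the learner explores only those
        else:
            a = max(allowed, key=lambda x: Q[(s, x)])
        s2 = s + a
        r = 1.0 if s2 == 4 else -0.1     # illustrative reward: goal bonus, step cost
        best_next = max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2
```

The division of labor mirrors the abstract's claim: the planner keeps the agent away from unreasonable choices (so learning needs far less experience), while the learned values adapt the behavior to the environment.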