A Novel Concept for the Study of Heterogeneous Robotic Swarms

Swarm robotics systems are characterized by decentralized control, limited communication between robots, use of local information, and the emergence of global behavior. Such systems have shown their potential for flexibility and robustness [1]-[3]. However, existing swarm robotics systems are by and large still limited to displaying simple proof-of-concept behaviors under laboratory conditions. It is our contention that one of the factors holding back swarm robotics research is the almost universal insistence on homogeneous system components. We believe that swarm robotics designers must embrace heterogeneity if they ever want swarm robotics systems to approach the complexity required of real-world applications. To date, swarm robotics systems have almost exclusively comprised physically and behaviorally undifferentiated agents. This design decision has its roots in ethological models of self-organizing natural systems. These models serve as inspiration for swarm robotics system designers, but they are often highly abstract simplifications of natural systems and, to date, have largely assumed homogeneous agents. Selected dynamics of the systems under study are shown to emerge from the interactions of identical system components, ignoring the heterogeneities (physical, spatial, functional, and informational) found in almost any natural system. The field of swarm robotics currently lacks methods and tools with which to study and leverage the heterogeneity that is present in natural systems.
To remedy this deficiency, we propose Swarmanoid, an innovative swarm robotics system composed of three different robot types with complementary skills: foot-bots are small autonomous robots specialized in moving on both even and uneven terrains, capable of self-assembling and of transporting objects or other robots; hand-bots are autonomous robots capable of climbing some vertical surfaces and manipulating small objects; and eye-bots are autonomous flying robots that can attach to an indoor ceiling, capable of analyzing the environment from a privileged position.
Abstract. The swarm intelligence paradigm has proven to have very interesting properties such as robustness, flexibility, and the ability to solve complex problems by exploiting parallelism and self-organization. Several robotic implementations of this paradigm confirm that these properties can be exploited for the control of a population of physically independent mobile robots. The work presented here introduces a new robotic concept called swarm-bot, in which the collective interaction exploited by the swarm intelligence mechanism goes beyond the control layer and is extended to the physical level. This implies the addition of new mechanical functionalities on the single robot, together with new electronics and software to manage them. These new functionalities, even if not directly related to mobility and navigation, make it possible to address complex mobile robotics problems, such as extreme all-terrain exploration. The work also shows how this new concept is investigated using a simulation tool (swarmbot3d) specifically developed for quickly designing and evaluating new control algorithms. Experimental work shows how the detailed simulated representation of one s-bot has been calibrated to match the behaviour of the real robot.
This article describes simulations on populations of neural networks that both evolve at the population level and learn at the individual level. Unlike in other simulations, the evolutionary task (finding food in the environment) and the learning task (predicting the next position of food on the basis of the present position and the network's planned movement) are distinct tasks. In these conditions, learning influences evolution (without Lamarckian inheritance of learned weight changes) and evolution influences learning. Average but not peak fitness shows better evolutionary growth with learning than without learning. After the initial generations, individuals that learn to predict during life also improve their food-finding ability during life. Furthermore, individuals that inherit an innate capacity to find food also inherit an innate predisposition to learn to predict the sensory consequences of their movements. They do not predict better at birth, but they do learn to predict better than individuals of the initial generation given the same learning experience. The results are interpreted in terms of a notion of dynamic correlation between the fitness surface and the learning surface. Evolution succeeds in finding both individuals that have high fitness and individuals that, although they do not have high fitness at birth, end up with high fitness because they learn to predict.
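The interplay described above (selection on an evolutionary task, with non-inherited lifetime learning on a separate task) can be illustrated with a toy sketch. This is not the authors' model: the tasks, network, and parameters below are all illustrative stand-ins, with genomes as plain weight vectors, a quadratic "food-finding" fitness, and a "prediction" task implemented as gradient-free weight adaptation toward a different target.

```python
# Minimal sketch (illustrative, not the original simulation) of population-level
# evolution combined with individual-level learning on a *different* task.
import random

def make_genome(n):
    return [random.uniform(-1, 1) for _ in range(n)]

def fitness(weights):
    # Stand-in for the food-finding task: reward weights near a target vector.
    target = [0.5] * len(weights)
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def learn(genome, steps=20, lr=0.05):
    # Lifetime learning on the separate "prediction" task: pull weights toward
    # a different target. Learned changes are NOT written back to the genome.
    learned = list(genome)
    pred_target = [-0.5] * len(genome)
    for _ in range(steps):
        for i in range(len(learned)):
            learned[i] += lr * (pred_target[i] - learned[i])
    return learned

def evolve(pop_size=30, n_genes=8, generations=50):
    pop = [make_genome(n_genes) for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness is evaluated on the phenotype AFTER learning, but
        # reproduction copies the unlearned genome (no Lamarckian inheritance).
        ranked = sorted(pop, key=lambda g: fitness(learn(g)), reverse=True)
        parents = ranked[: pop_size // 5]
        pop = [[w + random.gauss(0, 0.05) for w in random.choice(parents)]
               for _ in range(pop_size)]
    return pop
```

Because selection sees only post-learning fitness, evolution favors genomes that end up fit after learning, not genomes that are fit at birth, which mirrors the paper's observation that later generations inherit a predisposition to benefit from learning.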
In this paper, we introduce a self-assembling and self-organizing artifact, called a swarm-bot, composed of a swarm of s-bots, mobile robots with the ability to connect to and to disconnect from each other. We discuss the challenges involved in controlling a swarm-bot and address the problem of synthesizing controllers for the swarm-bot using artificial evolution. Specifically, we study aggregation and coordinated motion of the swarm-bot using a physics-based simulation of the system. Experiments, using a simplified simulation model of the s-bots, show that evolution can discover simple but effective controllers for both the aggregation and the coordinated motion of the swarm-bot. Analysis of the evolved controllers shows that they have properties of scalability, that is, they continue to be effective for larger group sizes, and of generality, that is, they produce similar behaviors for configurations different from those they were originally evolved for. The portability of the evolved controllers to real s-bots is tested using a detailed simulation model which has been validated against the real s-bots in a companion paper in this same special issue.
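The scalability property reported above, where a controller evolved for one group size remains effective for larger groups, can be sketched with a simple aggregation model. The controller below is a hand-written stand-in for an evolved one (each robot moves toward the group centroid), and all names and parameters are assumptions; the point is only that the same per-robot rule reduces dispersion regardless of group size.

```python
# Illustrative sketch: a size-independent aggregation rule evaluated at two
# group sizes, standing in for an evolved s-bot controller.
import math
import random

def step(positions, gain=0.1):
    # Each robot moves a fraction of the way toward the group centroid.
    cx = sum(p[0] for p in positions) / len(positions)
    cy = sum(p[1] for p in positions) / len(positions)
    return [(x + gain * (cx - x), y + gain * (cy - y)) for x, y in positions]

def dispersion(positions):
    # Mean distance to the centroid: a simple aggregation metric.
    cx = sum(p[0] for p in positions) / len(positions)
    cy = sum(p[1] for p in positions) / len(positions)
    return sum(math.hypot(x - cx, y - cy) for x, y in positions) / len(positions)

def run(n_robots, steps=50, seed=1):
    random.seed(seed)
    pos = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(n_robots)]
    before = dispersion(pos)
    for _ in range(steps):
        pos = step(pos)
    return before, dispersion(pos)
```

Running `run(5)` and `run(20)` shows dispersion collapsing in both cases: the rule references no global group size, which is the structural reason such controllers scale.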
Co-evolution (i.e., the evolution of two or more competing populations with coupled fitness) has several features that may potentially enhance the power of adaptation of artificial evolution. In particular, as discussed by Dawkins and Krebs [3], competing populations may reciprocally drive one another to increasing levels of complexity by producing an evolutionary "arms race". In this paper we investigate the role of co-evolution in the context of evolutionary robotics. In particular, we try to understand under what conditions co-evolution can lead to "arms races". Moreover, we show that in some cases artificial co-evolution has a higher adaptive power than simple evolution. Finally, by analyzing the dynamics of co-evolved populations, we show that in some circumstances well-adapted individuals would be better advised to adopt simple but easily modifiable strategies suited to the current competitors' strategies, rather than incorporating complex and general strategies that may be effective against a wide range of opposing counter-strategies.
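The core mechanism, two populations whose fitnesses are coupled because each individual is scored against sampled opponents from the other population, can be sketched in a few lines. This toy (not the paper's setup; all names and parameters are assumptions) uses scalar "strategies" scored by the fraction of opponents they beat, so both populations visibly escalate, a minimal analogue of an arms race.

```python
# Illustrative sketch of competitive co-evolution with coupled fitness.
import random

def reproduce(ranked, pop_size, elite=5, sigma=0.05):
    # Mutated offspring of the top-ranked individuals.
    parents = ranked[:elite]
    return [random.choice(parents) + random.gauss(0, sigma)
            for _ in range(pop_size)]

def coevolve(generations=30, pop_size=20, seed=0):
    random.seed(seed)
    pop_a = [random.uniform(0, 1) for _ in range(pop_size)]
    pop_b = [random.uniform(0, 1) for _ in range(pop_size)]

    def beats(x, opponents):
        # Coupled fitness: fraction of the opposing population beaten.
        return sum(x > o for o in opponents) / len(opponents)

    for _ in range(generations):
        # Tie-break on raw value so selection stays directional once an
        # individual beats every opponent. (In this scalar toy, beating more
        # opponents reduces to having a larger value, so escalation is visible.)
        ranked_a = sorted(pop_a, key=lambda x: (beats(x, pop_b), x), reverse=True)
        ranked_b = sorted(pop_b, key=lambda x: (beats(x, pop_a), x), reverse=True)
        pop_a = reproduce(ranked_a, pop_size)
        pop_b = reproduce(ranked_b, pop_size)
    return pop_a, pop_b
```

Each population's improvement raises the bar for the other, so both drift upward even though neither has an absolute fitness target, which is the coupling the abstract refers to.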
The problem of the validity of simulation is particularly relevant for methodologies that use machine learning techniques to develop control systems for autonomous robots, like, for instance, the Artificial Life approach named Evolutionary Robotics. In fact, although it has been demonstrated that training or evolving robots in the real environment is possible, the number of trials needed to test the system discourages the use of physical robots during the training period. By evolving neural controllers for a Khepera robot in computer simulations and then transferring the obtained agents to the real environment, we show that: (a) an accurate model of a particular robot-environment dynamics can be built by sampling the real world through the sensors and the actuators of the robot; (b) the performance gap between the behaviors obtained in the simulated and real environments may be significantly reduced by introducing a "conservative" form of noise; (c) if a decrease in performance is observed when the system is transferred to the real environment, successful and robust results can be obtained by continuing the evolutionary process in the real environment for a few generations.
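Points (a) and (b) can be sketched concretely. The snippet below is an illustration, not the paper's implementation: the lookup-table sensor model stands in for sampling the real sensors at known poses, and the noise function shows the idea of perturbing every simulated reading so an evolved controller cannot over-fit to exact simulated values; all names and figures are assumptions.

```python
# Illustrative sketch of a sampled sensor model plus "conservative" noise.
import random

def sampled_sensor_model(samples):
    # `samples` is assumed data: {distance_cm: mean_reading} collected by
    # placing the real robot at known distances and recording its sensors.
    def read(distance_cm):
        # Return the reading recorded at the nearest sampled distance.
        nearest = min(samples, key=lambda d: abs(d - distance_cm))
        return samples[nearest]
    return read

def conservative_noise(reading, level=0.05, rng=random):
    # Perturb every simulated reading slightly, so controllers evolved in
    # simulation never rely on exact values that won't hold on the robot.
    return reading * (1 + rng.uniform(-level, level))
```

A controller evolved against `conservative_noise(model(d))` experiences a simulation that is deliberately less precise than the simulator could be, which is what narrows the gap when the controller is transferred to hardware.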
We present a set of experiments in which simulated robots are evolved for the ability to aggregate and move together toward a light target. By developing and using quantitative indexes that capture the structural properties of the formations that emerge, we show that evolved individuals display interesting behavioral patterns in which groups of robots act as a single unit. Moreover, evolved groups of robots with identical controllers display primitive forms of situated specialization and play different behavioral functions within the group according to the circumstances. Overall, the results presented in the article demonstrate that evolutionary techniques, by exploiting the self-organizing behavioral properties that emerge from the interactions between the robots and between the robots and the environment, are a powerful method for synthesizing collective behavior.
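One common quantitative index of whether a group "acts as a single unit" is a polarization (alignment) order parameter over the robots' headings. The paper's own indexes are not reproduced here; the function below is a standard, hedged example of the kind of structural measure such analyses use.

```python
# Illustrative group-coordination index: heading polarization in [0, 1].
import math

def polarization(headings):
    # Mean resultant length of the robots' heading vectors (headings in
    # radians): 1.0 when all robots move in the same direction, near 0
    # when headings are spread uniformly.
    sx = sum(math.cos(h) for h in headings)
    sy = sum(math.sin(h) for h in headings)
    return math.hypot(sx, sy) / len(headings)
```

Tracking such an index over a trial makes "moving together" measurable: a group translating toward the light as one unit holds polarization near 1, while an uncoordinated group hovers near 0.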