Developing self-organized swarm systems capable of adapting to environmental changes as well as to dynamic situations is a complex challenge. An efficient labour division model, with the ability to regulate the distribution of work among swarm robots, is an important element of this kind of system. This paper extends the popular Response Threshold Model (RTM) and proposes a new Adaptive Response Threshold Model (ARTM). Experiments were carried out in simulation and in real-robot scenarios with the aim of studying the performance of this new adaptive model. Results presented in this paper verify that the extended approach improves on the adaptability of previous systems. For example, by reducing collision duration among robots in foraging missions, our approach helps small swarms of robots to adapt more efficiently to changing environments, thus increasing their self-sustainability (survival rate). Finally, we propose a minimal version of ARTM, which is derived from the conclusions obtained through real-robot and simulation results.
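The abstract does not reproduce the model's equations, but the classic Response Threshold Model that ARTM extends is commonly formulated as a sigmoidal engagement probability, T_θ(s) = s^n / (s^n + θ^n), where s is the task stimulus and θ is the robot's personal threshold. A minimal sketch under that standard formulation (function names and the swarm-step helper are illustrative, not from the paper):

```python
import random

def response_probability(stimulus, threshold, n=2):
    """Classic response-threshold function: probability that a robot
    engages a task, given the task stimulus and its personal threshold.
    With n=2 this is the common sigmoidal form s^2 / (s^2 + theta^2)."""
    return stimulus**n / (stimulus**n + threshold**n)

def swarm_step(thresholds, stimulus, rng):
    """One decision step: each robot independently decides whether to
    engage the task. Heterogeneous thresholds yield labour division:
    low-threshold robots respond first as the stimulus grows."""
    return [rng.random() < response_probability(stimulus, th)
            for th in thresholds]

rng = random.Random(0)
# When the stimulus equals the threshold, engagement probability is 0.5.
print(response_probability(4.0, 4.0))   # 0.5
# A strong stimulus recruits even reluctant (high-threshold) robots.
print(response_probability(10.0, 1.0) > 0.99)   # True
```

An adaptive variant such as ARTM would additionally update each θ over time (e.g. lowering it while a robot works a task and raising it otherwise); the exact update rule is specific to the paper and not shown here.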
In the field of human motor control, the motor synergy hypothesis explains how humans simplify body control dimensionality by coordinating groups of muscles, called motor synergies, instead of controlling muscles independently. In most applications of motor synergies to low-dimensional control in robotics, motor synergies are extracted from given optimal control signals. In this paper, we address the problems of how to extract motor synergies without optimal data given, and how to apply motor synergies to achieve low-dimensional task-space tracking control of a human-like robotic arm actuated by redundant muscles, without prior knowledge of the robot. We propose to extract motor synergies from a subset of randomly generated reaching-like movement data. The essence is to first approximate the corresponding optimal control signals, using estimations of the robot's forward dynamics, and to extract the motor synergies subsequently. In order to avoid modeling difficulties, a learning-based control approach is adopted such that control is accomplished via estimations of the robot's inverse dynamics. We present a kernel-based regression formulation to estimate the forward and the inverse dynamics, and a sliding controller in order to cope with estimation error. Numerical evaluations show that the proposed method enables extraction of motor synergies for low-dimensional task-space control.
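Motor synergies are typically extracted by factorizing a matrix of muscle activation signals into a small set of basis vectors; the literature commonly uses non-negative matrix factorization or PCA for this. The sketch below uses a PCA-style SVD decomposition on synthetic, reaching-like data as a simple stand-in (the data and function names are illustrative assumptions, not the paper's method):

```python
import numpy as np

def extract_synergies(activations, k):
    """Extract k motor synergies from an (n_samples, n_muscles) matrix
    of muscle activation signals via SVD (PCA-style decomposition).
    Each returned row is one synergy: a fixed pattern across muscles."""
    centered = activations - activations.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k]                      # shape (k, n_muscles)

def reconstruct(synergies, coefficients, mean):
    """Map low-dimensional synergy coefficients back to full muscle
    space: control k coefficients instead of n_muscles signals."""
    return coefficients @ synergies + mean

rng = np.random.default_rng(0)
# Hypothetical data: 200 samples of 12 muscle activations that actually
# live in a 3-dimensional subspace, mimicking coordinated movements.
latent = rng.standard_normal((200, 3))
mixing = rng.standard_normal((3, 12))
data = latent @ mixing

syn = extract_synergies(data, 3)
coef = (data - data.mean(axis=0)) @ syn.T
err = np.abs(reconstruct(syn, coef, data.mean(axis=0)) - data).max()
print(err < 1e-8)   # True: 3 synergies suffice for rank-3 data
```

The point of the sketch is the dimensionality reduction itself: once synergies are fixed, a controller only needs to choose k coefficients per time step rather than one activation per muscle.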
In this paper we present a new method for generating humanoid robot movements. We propose to merge the intuitiveness of the widely used key-frame technique with the optimization provided by automatic learning algorithms. Key-frame approaches are straightforward but require the user to precisely define the position of each robot joint, a very time-consuming task. Automatic learning strategies can search for a good combination of parameters resulting in an effective motion of the robot without requiring user effort. On the other hand, their search usually cannot be easily driven by the operator and the results can hardly be modified manually. While the fitness function gives a quantitative evaluation of the motion (e.g. "How far did the robot move?"), it cannot provide a qualitative evaluation, for instance the similarity to human movements. In the proposed technique the user, exploiting the key-frame approach, can intuitively bound the search by specifying relationships to be maintained between the joints and by giving a range of possible values for easily understandable parameters. The automatic learning algorithm then performs a local exploration of the parameter space inside the defined bounds. Thanks to the clear meaning of the parameters provided by the user, s/he can give a qualitative evaluation of the generated motion (e.g. "This walking gait looks odd. Let's raise the knee more") and easily introduce new constraints to the motion. Experimental results proved the approach to be successful in terms of reduced motion-development time, natural appearance of the motion, and stability of the walking.
This paper describes a method of rule extraction for generating appropriate actions by a robot in a multiparty conversation, based on the relative probability of human actions in similar situations. The proposed method was applied to a dataset collected from multiparty interactions between two robots and one human subject who took on the role of supporting one robot. By computing the relative occurrence probabilities of human actions after the execution of the robots' actions, twenty rules describing human behavior in such a role were identified. To evaluate the rules, the human role was filled by a new bystander robot, and other subjects were asked to report their impressions of video clips in which the bystander robot acted or did not act in accordance with the rules. The reported impressions and a quantitative analysis of the rules suggest that the listening behavior and the supporting role played by the subjects can be reproduced by a bystander robot acting in accordance with the rules identified by the proposed method.
The use of biologically realistic (brain-like) control systems in autonomous robots offers two potential benefits. For neuroscience, it may provide important insights into normal and abnormal control and decision-making in the brain, by testing whether the computational learning and decision rules proposed on the basis of simple laboratory experiments lead to effective and coherent behaviour in complex environments. For robotics, it may offer new insights into control system designs, for example in the context of threat avoidance and self-preservation. In the brain, learning and decision-making for rewards and punishments (such as pain) are thought to involve integrated systems for innate (Pavlovian) responding, habit-based learning, and goal-directed learning, and these systems have been shown to be well described by reinforcement learning (RL) models. Here, we simulated this 3-system control hierarchy (in which the innate system is derived from an evolutionary learning model), and show that it reliably achieves successful performance in a dynamic predator-avoidance task. Furthermore, we show situations in which a 3-system architecture provides clear advantages over single- or dual-system architectures. Finally, we show that simulating a computational model of obsessive-compulsive disorder, an example of a disease thought to involve a specific deficit in the integration of habit-based and goal-directed systems, can reproduce the results of human clinical experiments. The results illustrate how robotics can provide a valuable platform to test the validity and utility of computational models of human behaviour, in both health and disease. They also illustrate how bio-inspired control systems might usefully inform self-preservative behaviour in autonomous robots, both in normal and malfunctioning situations.
This paper investigates touching as a natural way for humans to communicate with robots. In particular we developed a system to edit motions of a small humanoid robot by touching its body parts. This interface has two purposes: it allows the user to develop robot motions in a very intuitive way, and it allows us to collect data useful for studying the characteristics of touching as a means of communication. Experimental results confirm the interface's ease of use for inexpert users, and analysis of the data collected during human-robot teaching episodes has yielded several useful insights.