We address the difficult problem of catching in-flight objects with uneven shapes. Catching requires solving three complex problems within milliseconds: accurately predicting the trajectory of fast-moving objects, predicting a feasible catching configuration, and planning the arm motion. We follow a programming-by-demonstration approach to learn models of the object dynamics and of the arm movement from throwing examples. We propose a new methodology for finding a feasible catching configuration in a probabilistic manner, and we use a dynamical-systems approach to encode motion from several demonstrations. This enables rapid, reactive adaptation of the arm motion in the presence of sensor uncertainty. We validate the approach in simulation with the iCub humanoid robot and in real-world experiments with the KUKA LWR 4+ (a 7-degree-of-freedom arm) to catch a hammer, a tennis racket, an empty bottle, a partially filled bottle, and a cardboard box.
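The core property of the dynamical-systems encoding mentioned above is that the motion is generated by integrating a stable attractor system, so updating the attractor (e.g., a re-estimated catching point) re-routes the arm instantly without re-planning. A minimal sketch of this idea, with an illustrative hand-picked linear system rather than the learned model from the paper:

```python
import numpy as np

def ds_step(x, target, A, dt=0.001):
    """One Euler step of a linear attractor dynamical system.

    x_dot = A @ (x - target). With A negative definite, the motion
    converges to `target` from any state, so shifting `target`
    (e.g., an updated catching configuration from the trajectory
    predictor) redirects the arm on the fly without re-planning.
    """
    return x + dt * (A @ (x - target))

# Toy 2-D example: a stable attractor pulling the end effector
# toward a (hypothetical) catching point at the origin.
A = np.array([[-4.0, 0.0],
              [0.0, -4.0]])        # negative definite -> globally stable
x = np.array([0.5, 0.3])           # current end-effector position
target = np.array([0.0, 0.0])      # predicted catching configuration
for _ in range(5000):
    x = ds_step(x, target, A)
print(np.linalg.norm(x - target) < 1e-3)  # converged close to the target
```

In the learned setting, `A` would be replaced by a state-dependent, nonlinear velocity field estimated from the throwing demonstrations; the sketch only shows why attractor dynamics make the adaptation reactive.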
Background: The ability to follow one another's gaze plays an important role in our social cognition, especially when we perform tasks together synchronously. We investigate how gaze cues can improve performance in a simple coordination task (the mirror game), in which two players mirror each other's hand motions. In this game, each player is either a leader or a follower. To study the effect of gaze systematically, the leader's role is played by a robotic avatar. We contrast two conditions, in which the avatar either does or does not provide explicit gaze cues indicating the next location of its hand. Specifically, we investigated (a) whether participants can exploit these gaze cues to improve their coordination, (b) how gaze cues affect action prediction and temporal coordination, and (c) whether introducing active gaze behavior makes avatars appear more realistic and human-like from the user's point of view.
Methodology/Principal Findings: 43 subjects participated in 8 trials of the mirror game. Each subject performed the game in both conditions (with and without gaze cues). In this within-subject study, the order of the conditions was randomized across participants, and the avatar's perceived realism was assessed with a post-hoc questionnaire. When gaze cues were provided, a quantitative assessment of synchrony between participants and the avatar revealed a significant improvement in subjects' reaction time (RT). This confirms our hypothesis that gaze cues improve the follower's ability to predict the avatar's actions. An analysis of the frequency patterns of the two players' hand movements reveals that gaze cues also improve the overall temporal coordination between the players.
Finally, analysis of the subjective evaluations from the questionnaires reveals that, in the presence of gaze cues, participants found the avatar not only more human-like and realistic but also easier to interact with.
Conclusion/Significance: This work confirms that people can exploit gaze cues to predict another person's movements and to better coordinate their motions with a partner, even when the partner is a computer-animated avatar. Moreover, this study contributes further evidence that implementing biological features, here task-relevant gaze cues, makes a humanoid robotic avatar appear more human-like and thus increases the user's sense of affiliation.
Abstract: Robustness to perturbation has been advocated as a key element of robot control, and efforts in that direction are numerous. While in essence these approaches aim at "endowing robots with a flexibility similar to that displayed by humans," few have actually examined how humans react to fast perturbations. We recorded kinematic data from human subjects during grasping motions under very fast perturbations. The results show a strong coupling between the reach and grasp components of the task, which enables rapid adaptation of the fingers in coordination with the hand posture when the target object is perturbed. We develop a robot controller based on Coupled Dynamical Systems that exploits this coupling between two dynamical systems driving the hand and finger motions. This offers a compact encoding of a variety of reach-and-grasp motions that adapts on the fly to perturbations without any re-planning. To validate the model, we control the motion of the iCub robot while it reaches for different objects.
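The coupling described above can be illustrated with a toy model in which the finger dynamics are slaved to the hand's progress toward the target: the finger aperture's attractor shrinks as the hand closes in, so perturbing the target automatically re-opens the fingers. This is a minimal sketch with hypothetical gains and a hand-coded coupling term, not the paper's learned controller:

```python
import numpy as np

def coupled_step(hand, fingers, hand_target, dt=0.001,
                 a_hand=4.0, a_fing=6.0, beta=2.0):
    """One Euler step of a toy coupled dynamical system.

    The hand follows its own attractor toward `hand_target`.
    The fingers' target aperture is coupled to the hand's remaining
    distance to the target: far away the fingers stay open, and they
    close as the hand converges. Moving `hand_target` mid-motion
    raises the distance again, re-opening the fingers with no
    re-planning. Gains a_hand, a_fing and coupling beta are
    illustrative choices, not identified from data.
    """
    dist = np.linalg.norm(hand - hand_target)
    aperture_target = beta * dist          # coupling: aperture tracks distance
    hand_new = hand + dt * (-a_hand * (hand - hand_target))
    fingers_new = fingers + dt * (-a_fing * (fingers - aperture_target))
    return hand_new, fingers_new

hand = np.array([0.4, 0.2, 0.1])   # end-effector position (arbitrary units)
fingers = 0.9                      # finger aperture, 0 = fully closed
target = np.zeros(3)               # hypothetical object location
for _ in range(5000):
    hand, fingers = coupled_step(hand, fingers, target)
print(fingers < 1e-2)  # fingers have closed as the hand reached the target
```

The design choice this sketch highlights is that the grasp never needs its own plan: its behavior emerges from the reach state through the coupling, which is what makes the adaptation to perturbations immediate.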