Close human-robot cooperation is a key enabler for new developments in advanced manufacturing and assistive applications. Close cooperation requires robots that can predict human actions and intent by understanding human non-verbal cues. Recent approaches based on neural networks have led to encouraging results on the human action prediction problem, in both continuous and discrete spaces. Our approach extends the research in this direction. Our contributions are three-fold. First, we validate the use of gaze and body-pose cues as a means of predicting human action through a feature-selection method. Next, we address two shortcomings of the existing literature: predicting multiple action sequences and sequences of variable length. This is achieved by applying an encoder-decoder recurrent neural network topology to the discrete action prediction problem. In addition, we theoretically demonstrate the importance of predicting multiple action sequences as a means of estimating the stochastic reward in a human-robot cooperation scenario. Finally, we show that the prediction model can be trained effectively on an action prediction dataset involving human motion data, and we explore the influence of the model's parameters on its performance.
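To make the encoder-decoder idea concrete, the following is a minimal sketch in PyTorch, not the authors' implementation: a recurrent encoder summarizes a window of observed gaze/pose features, and a recurrent decoder emits a discrete action sequence token by token. The feature dimension, action vocabulary, and end-of-sequence convention are illustrative assumptions.

```python
# Hedged sketch of an encoder-decoder RNN for variable-length discrete
# action prediction. All sizes and the EOS convention are assumptions.
import torch
import torch.nn as nn

FEAT_DIM = 8    # assumed per-frame gaze + body-pose feature size
N_ACTIONS = 6   # assumed discrete action vocabulary (incl. EOS token)
EOS = 0         # assumed end-of-sequence / start token
HID = 32

class Seq2SeqActionPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(FEAT_DIM, HID, batch_first=True)
        self.embed = nn.Embedding(N_ACTIONS, HID)
        self.decoder = nn.GRU(HID, HID, batch_first=True)
        self.out = nn.Linear(HID, N_ACTIONS)

    def forward(self, obs, max_len=10):
        # obs: (batch, T, FEAT_DIM) observed gaze/pose frames
        _, h = self.encoder(obs)             # summarize the observation window
        tok = torch.full((obs.size(0), 1), EOS, dtype=torch.long)
        logits = []
        for _ in range(max_len):             # autoregressive decoding
            e = self.embed(tok)              # embed the previous action token
            o, h = self.decoder(e, h)
            step = self.out(o)               # (batch, 1, N_ACTIONS)
            logits.append(step)
            tok = step.argmax(dim=-1)        # greedy choice feeds the next step
        return torch.cat(logits, dim=1)      # (batch, max_len, N_ACTIONS)

model = Seq2SeqActionPredictor()
obs = torch.randn(2, 15, FEAT_DIM)           # two observation windows, 15 frames each
print(model(obs).shape)                      # torch.Size([2, 10, 6])
```

At inference, decoding can stop when EOS is produced, which is what makes the output length variable; sampling from the softmax instead of taking the greedy argmax yields multiple candidate action sequences, whose empirical returns can serve as an estimate of the stochastic reward discussed above.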
Humans have the fascinating capacity to process non-verbal visual cues to understand and anticipate the actions of other humans. This "intention reading" ability is underpinned by shared motor repertoires and action models, which we use to interpret the intentions of others as if they were our own. We investigate how different cues contribute to the legibility of human actions during interpersonal interactions. Our first contribution is a publicly available dataset of human body-motion and eye-gaze recordings, acquired in an experimental scenario in which an actor interacts with three subjects. From these data, we conducted a human study to analyse the importance of the different non-verbal cues for action perception. As our second contribution, we used the motion/gaze recordings to build a computational model describing the interaction between two persons. As a third contribution, we embedded this model in the controller of an iCub humanoid robot and conducted a second human study, in the same scenario but with the robot as the actor, to validate the model's "intention reading" capability. Our results show that it is possible to model the (non-verbal) signals exchanged by humans during interaction, and to incorporate such a mechanism in robotic systems with the twin goals of (i) being able to "read" human action intentions, and (ii) acting in a way that is legible to humans.
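One simple way such an "intention reading" model could be realized (a toy sketch under our own assumptions, not the paper's actual model) is Bayesian inference over candidate targets: the belief about which object a person intends to act on is updated by how well the observed gaze direction points at each object.

```python
# Toy sketch of gaze-based intention reading as Bayesian inference.
# The von Mises-style likelihood and kappa value are assumptions.
import numpy as np

def intention_posterior(gaze_dir, targets, head_pos, kappa=8.0):
    """P(target | gaze) proportional to exp(kappa * cos(gaze, head->target))."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    scores = []
    for t in targets:
        v = t - head_pos                      # direction from head to candidate
        v = v / np.linalg.norm(v)
        scores.append(np.exp(kappa * float(gaze_dir @ v)))
    scores = np.asarray(scores)
    return scores / scores.sum()              # normalized posterior

head = np.array([0.0, 0.0, 1.5])              # head position, m
objects = [np.array([0.5, 1.0, 1.0]),         # two candidate targets
           np.array([-0.5, 1.0, 1.0])]
gaze = np.array([0.45, 1.0, -0.5])            # gaze roughly toward the first object
print(intention_posterior(gaze, objects, head))
```

A posterior like this can be evaluated frame by frame, so the robot's belief about the human's intended target sharpens as the gaze cue accumulates.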
This paper presents a contribution to the study of control law structures and to the selection of relevant sensory information for humanoid robots in situations where dynamic balance is jeopardized. In the example considered, the system first experiences a large disturbance and then, by an appropriate control action, resumes a "normal" posture of standing on one leg. To examine the control laws used by humans, an experiment was performed in which a human subject was subjected to perturbations, and the ensuing reactions were recorded to obtain complete information about the subject's motion and ground reaction force. A humanoid model was then developed with characteristics matching those of the experimental subject, and the whole experiment was simulated so as to reproduce a motion similar to that of the human test subject. The analysis of the applied control laws, and of the behavior of selected ground reference points (ZMP, CMP and CM projection on the ground surface), provided valuable insight into balance strategies that humanoid robots might employ to better mimic the kinetics and kinematics of humans compensating for balance disturbances. The term "dynamically balanced" is used to refer to the condition where at least one humanoid foot, or terminal link, is flat on the ground and immobile, whether during standing or a phase of walking.
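For readers unfamiliar with the three ground reference points, the following is a minimal illustrative sketch (not the paper's code) using the common cart-table approximations: the CM ground projection from the CoM position, the ZMP from CoM dynamics, and the CMP from the measured ground reaction force.

```python
# Hedged sketch of the ground reference points compared in the study.
# Numeric inputs are made-up example values.
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def cm_projection(com):
    """Vertical projection of the center of mass onto the ground plane."""
    return com[:2]

def zmp(com, com_acc):
    """Cart-table ZMP: p = x - z * x_dd / (z_dd + g), per horizontal axis."""
    return com[:2] - com[2] * com_acc[:2] / (com_acc[2] + G)

def cmp(com, grf):
    """CMP: where a line through the CoM along the GRF pierces the ground."""
    return com[:2] - com[2] * grf[:2] / grf[2]

com = np.array([0.02, 0.00, 0.90])       # CoM position, m
com_acc = np.array([0.5, 0.0, 0.0])      # CoM acceleration, m/s^2
grf = np.array([15.0, 0.0, 700.0])       # ground reaction force, N
print(cm_projection(com), zmp(com, com_acc), cmp(com, grf))
```

When the CMP coincides with the ZMP, no moment is being generated about the CoM; their divergence during the recorded perturbation responses is what makes these points informative about human balance strategies.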