Abstract: This paper proposes a method to achieve fast and fluid human–robot interaction by estimating the progress of the human's movement. The method allows the progress, also referred to as the phase of the movement, to be estimated even when observations of the human are partial and occluded, a problem typically encountered when using motion capture systems in cluttered environments. By leveraging the framework of Interaction Probabilistic Movement Primitives, phase estimation makes it possible to classify the hum…
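The occlusion-robust estimation this abstract describes rests on conditioning a learned trajectory distribution on whichever dimensions are actually observed. Below is a minimal sketch of that idea in Python/NumPy: Gaussian conditioning of a ProMP weight distribution on a partial observation, where occluded DOFs are simply omitted from the observation matrix. The function name, dimensions, and noise value are illustrative assumptions, not the authors' code.

```python
import numpy as np

def condition_promp(mu_w, Sigma_w, Phi_obs, y_obs, sigma_y=1e-4):
    """Condition a ProMP weight distribution N(mu_w, Sigma_w) on a
    partial observation y_obs seen through the basis matrix Phi_obs.

    Phi_obs contains rows only for the observed (non-occluded) DOFs,
    so occluded dimensions never enter the update.
    """
    S = Phi_obs @ Sigma_w @ Phi_obs.T + sigma_y * np.eye(len(y_obs))
    K = Sigma_w @ Phi_obs.T @ np.linalg.inv(S)          # Kalman-style gain
    mu_new = mu_w + K @ (y_obs - Phi_obs @ mu_w)
    Sigma_new = Sigma_w - K @ Phi_obs @ Sigma_w
    return mu_new, Sigma_new

# Hypothetical usage: 30 basis weights, 6 visible DOFs at this instant.
W, O = 30, 6
mu_post, Sigma_post = condition_promp(
    np.zeros(W), np.eye(W), np.random.randn(O, W), np.random.randn(O))
```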
“…Levine et al [123] demonstrated the high potential of deep learning algorithms for automated flexible robotic grasping of different objects in undefined poses. Maeda et al [144] proposed a method to achieve fast and fluid human-robot interaction by estimating the progress of the movement of the human. Their method (Fig.…”
In human-robot collaborative assembly, robots are often required to dynamically change their pre-planned tasks in order to collaborate with human operators in a shared workspace. However, today's robots are controlled by rigid, pre-generated code that cannot support effective human-robot collaboration. In response to this need, multi-modal, symbiotic communication and control methods have become a focus of recent research. These methods include voice processing, gesture recognition, haptic interaction, and brainwave perception. Deep learning is used for classification, recognition, and context-awareness identification. Within this context, this keynote provides an overview of symbiotic human-robot collaborative assembly and highlights future research directions.
“…In Maeda et al.'s original work [5], each demonstration (which used static observation windows, SOW) was resampled, yielding a nominal duration $T_{\text{nom\_sow}}$. We adjust the definition of the nominal duration to fit the length of the dynamic observation window (DOW), yielding $T_{\text{nom\_dow}}$.…”
Section: Phase Estimation With Dynamic Observation Windows
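As a rough illustration of the resampling step this excerpt refers to, the sketch below time-normalizes a demonstration (or an observation window) to a nominal number of samples. The function name and the choice of linear interpolation are assumptions for illustration, not the cited paper's implementation.

```python
import numpy as np

def resample_to_nominal(traj, T_nom):
    """Resample a demonstration (T x D array) to T_nom samples via
    linear interpolation, aligning it to a common nominal time index."""
    T, D = traj.shape
    t_old = np.linspace(0.0, 1.0, T)
    t_new = np.linspace(0.0, 1.0, T_nom)
    return np.column_stack(
        [np.interp(t_new, t_old, traj[:, d]) for d in range(D)])

# For a dynamic observation window, the nominal duration would be tied
# to the current window length rather than the full demonstration
# (assumption about how T_nom_dow is chosen).
window = np.random.randn(37, 3)     # hypothetical partial observation
aligned = resample_to_nominal(window, T_nom=50)
```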
“…To determine the best phase estimate during test time, we use the single-phase temporal model in [5]. A distribution of phase ratios across demonstrations is modeled as a normal distribution and set as the phase prior: $\alpha_{\text{dow}} \sim \mathcal{N}(\mu_{\alpha_{\text{dow}}}, \sigma_{\alpha_{\text{dow}}})$.…”
Section: Phase Estimation With Dynamic Observation Windows
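The excerpt's Gaussian phase prior can be combined with an observation likelihood to pick a maximum a posteriori phase estimate. A minimal sketch, assuming the candidate temporal scalings and their log-likelihoods are computed elsewhere (e.g., by conditioning the model at each candidate phase); all names here are hypothetical:

```python
import numpy as np
from scipy.stats import norm

def map_phase(candidate_alphas, log_likelihoods, mu_alpha, sigma_alpha):
    """Combine each candidate scaling's observation log-likelihood with
    the Gaussian phase prior alpha ~ N(mu_alpha, sigma_alpha) and return
    the MAP phase estimate."""
    log_prior = norm.logpdf(candidate_alphas, loc=mu_alpha, scale=sigma_alpha)
    return candidate_alphas[np.argmax(log_likelihoods + log_prior)]

# Hypothetical usage with a stand-in likelihood peaked near alpha = 1.1.
alphas = np.linspace(0.5, 2.0, 31)
loglik = -0.5 * (alphas - 1.1) ** 2
best_alpha = map_phase(alphas, loglik, mu_alpha=1.0, sigma_alpha=0.2)
```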
“…Tasks could then be generated as sequences of simple or simultaneously activated skills. The ProMP formulation, like Dynamic Movement Primitives (DMPs) [5], can do temporal and velocity modulation; however, unlike ProMPs, DMPs do not address the inverse problem of estimating the phase itself. Basis functions encode positions, which is critical for the tractability of interaction primitives, since estimating the forcing function of a human (a DMP requirement) is non-trivial.…”
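To make the temporal-modulation point concrete: in a ProMP, positions are a weighted sum of basis functions indexed by a phase variable, so speeding a motion up or down only rescales the phase, never the learned weights. A minimal sketch, with basis count, width, and weights chosen purely for illustration:

```python
import numpy as np

def gaussian_basis(z, n_basis=15, width=0.05):
    """Evaluate normalized Gaussian basis functions at phase z in [0, 1].
    Temporal modulation amounts to re-indexing these bases with a scaled
    phase z = alpha * t; the weights stay fixed."""
    centers = np.linspace(0.0, 1.0, n_basis)
    b = np.exp(-0.5 * (z - centers) ** 2 / width)
    return b / b.sum()

# Position at phase z is the weighted sum y(z) = Phi(z) @ w.
w = np.random.randn(15)      # hypothetical learned weights
alpha, t = 1.3, 0.4          # faster-than-nominal execution
y = gaussian_basis(alpha * t) @ w
```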
Human-robot collaboration is on the rise. Robots increasingly need to improve the efficiency and smoothness with which they assist humans by properly anticipating a human's intention. To do so, prediction models must improve in both accuracy and responsiveness. This work builds on Interaction Movement Primitives with phase estimation and reformulates the framework to use dynamic human-motion observations that constantly update the robot's anticipatory motions. The original framework considers only a single, fixed-duration static human observation, which is used to perform a single anticipatory motion. Dynamic observations, with built-in phase estimation, yield a series of updated robot motion distributions. Co-activation is performed between the existing and the newest, most probable robot motion distribution. The result is smooth anticipatory robot motion that is highly accurate and more responsive.
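Co-activation of two Gaussian motion distributions is commonly realized as an activation-weighted product of Gaussians; the sketch below shows that blend. The specific weighting schedule between the existing and newest distributions is an assumption here, not the paper's published formulation.

```python
import numpy as np

def co_activate(mu1, Sigma1, mu2, Sigma2, a1, a2):
    """Blend two Gaussian motion distributions by a precision-weighted,
    activation-scaled product of Gaussians (activations a1, a2 > 0)."""
    P1, P2 = np.linalg.inv(Sigma1), np.linalg.inv(Sigma2)
    Sigma = np.linalg.inv(a1 * P1 + a2 * P2)
    mu = Sigma @ (a1 * P1 @ mu1 + a2 * P2 @ mu2)
    return mu, Sigma

# Hypothetical usage: shift weight toward the newest distribution.
mu, Sigma = co_activate(np.zeros(3), np.eye(3),
                        np.ones(3), np.eye(3), a1=0.3, a2=0.7)
# With equal covariances this gives mu = 0.7 * ones(3), i.e. the blend
# sits closer to the newer, more strongly activated motion.
```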
“…In our work, each demonstration was resampled, yielding a nominal duration $T_{\text{norm}}$. As in [8], we assume that the $i$-th demonstration also has a constant temporal change in relation to the nominal duration, and we can define a scaling factor in Eqtn. 11 to index all demonstrations by the nominal time index.…”
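Since Eqtn. 11 itself is not reproduced in the excerpt, the following is only a plausible reading of the constant-temporal-change assumption: with demonstration duration $T_i$ and nominal duration $T_{\text{norm}}$, a constant scaling factor maps nominal indices to raw ones.

```python
def nominal_index_to_raw(t_nominal, T_i, T_norm):
    """Hypothetical reading of the scaling-factor idea (not Eqtn. 11
    verbatim): a constant rate alpha_i = T_i / T_norm maps the nominal
    time index t_nominal to the matching raw sample of demonstration i."""
    alpha_i = T_i / T_norm
    return int(round(alpha_i * t_nominal))
```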
Recent progress in human-robot collaboration (HRC) makes fast and fluid interactions possible, even when human observations are partial and occluded. Methods like Interaction Probabilistic Movement Primitives (ProMPs) model human Cartesian trajectories captured through motion capture systems. However, such a representation does not properly model tasks where similar motions are used to handle different objects; under current approaches, a robot cannot adapt its pose and dynamics for proper handling. We propose to integrate Electromyography (EMG) into the Interaction ProMP framework and use EMG-based muscular signals to augment the human observation representation. The contribution of our paper is an increased capacity to discern tasks that have similar trajectories but in which different tools are used, requiring the robot to adjust its pose for proper handling. Multidimensional Interaction ProMPs are used with an augmented vector that integrates muscle activity. Augmented, time-normalized trajectories are used in training to learn correlation parameters, and robot motions are predicted by finding the best weight combination and temporal scaling for a task. Collaborative single-task scenarios with similar motions but different objects were used and compared. In one experiment, only joint angles were recorded; in the other, EMG signals were additionally integrated. Task recognition was computed for both tasks. Observation state vectors with augmented EMG signals were able to completely identify differences across tasks, while the baseline method failed every time. Integrating EMG signals into collaborative tasks significantly increases the system's ability to recognize nuances in tasks that are otherwise imperceptible, by up to 74.6% in our studies. Furthermore, the integration of EMG signals for collaboration opens the door to a wide class of human-robot physical interactions based on haptic communication that has been largely unexploited in the field. Supplemental information, including video, code, and results analysis, can be found at [1].
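A minimal sketch of the observation augmentation this abstract describes: time-normalize the joint-angle and EMG channels to a common length and stack them into one observation matrix. The feature choice (raw resampled EMG rather than, say, rectified or filtered activations) and all names are illustrative assumptions.

```python
import numpy as np

def augment_observation(joint_angles, emg, n_samples=100):
    """Build an augmented human-observation matrix (time x channels) by
    resampling joint-angle and EMG streams to a common length and
    stacking them column-wise."""
    def normalize(x):
        t_old = np.linspace(0.0, 1.0, x.shape[0])
        t_new = np.linspace(0.0, 1.0, n_samples)
        return np.column_stack(
            [np.interp(t_new, t_old, x[:, d]) for d in range(x.shape[1])])
    return np.hstack([normalize(joint_angles), normalize(emg)])

# Hypothetical usage: 7 joint angles at 80 samples, 8 EMG channels at 120.
joints = np.random.randn(80, 7)
emg = np.random.randn(120, 8)
obs = augment_observation(joints, emg)   # shape (100, 15)
```

Task recognition can then score this augmented observation under each task's learned Interaction ProMP and select the most likely task.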