Abstract: Human-robot collaboration in industrial applications is a challenging robotic task. Humans working together with a robot at a shared workplace to complete a task may create events the robot cannot predict, since humans can act unpredictably. Humans tend to perform a task in a not fully repetitive manner, drawing on their expertise and cognitive capabilities. Traditional robot programming cannot cope with these challenges of human-robot collaboration. In this paper, a framework for robot learning by multiple human demonstrations …
“…To reduce human involvement and increase robustness to uncertainties, the most recent research has been focused on learning assembly skills either from human demonstrations [6] or directly from interactions with the environment [7]. The present research focuses on the latter.…”
Industrial robot manipulators play a significant role in modern manufacturing. Although peg-in-hole assembly is a common industrial task that has been extensively researched, safely solving complex, high-precision assembly in an unstructured environment remains an open problem. Reinforcement-learning (RL) methods have proven successful at autonomously solving manipulation tasks. However, RL is still not widely adopted in real robotic systems, because working with real hardware entails additional challenges, especially when using position-controlled manipulators. The main contribution of this work is a learning-based method for solving peg-in-hole tasks with hole-position uncertainty. We propose an off-policy, model-free reinforcement-learning method and bootstrap training speed using several transfer-learning techniques (sim2real) and domain randomization. Our proposed learning framework for position-controlled robots was extensively evaluated on contact-rich insertion tasks in a variety of environments.
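The domain-randomization idea from this abstract can be illustrated with a minimal sketch. Nothing below comes from the paper's implementation: the environment stub, the nominal hole position, and the noise ranges are all assumptions chosen for illustration.

```python
import random

class PegInHoleEnvStub:
    """Minimal stand-in for a peg-in-hole simulator (hypothetical API)."""
    def __init__(self):
        self.hole_position = (0.0, 0.0, 0.0)
        self.friction = 0.5

def randomize(env, rng, pos_noise=0.005, friction_range=(0.3, 0.9)):
    """Domain randomization: perturb the hole position and contact friction
    at every episode reset, so a policy trained in simulation tolerates
    hole-position uncertainty on the real robot."""
    nominal = (0.40, 0.10, 0.05)  # assumed nominal hole position in metres
    env.hole_position = tuple(
        c + rng.uniform(-pos_noise, pos_noise) for c in nominal
    )
    env.friction = rng.uniform(*friction_range)
    return env

rng = random.Random(0)
env = randomize(PegInHoleEnvStub(), rng)
```

In a real setup the same pattern would perturb many more parameters (sensor noise, controller gains, peg clearance); the key design choice is that randomization happens per episode, not per training run.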
“…Given that most current robotic assembly tasks in LfD [10,11,12] are demonstrated by human hands, capturing human hand movements is a crucial step for robots to understand human intentions. Human hand movements are often treated as trajectories, with capturing methods categorized into kinesthetic demonstration [11], motion-sensor demonstration [13,14], and teleoperated demonstration [15]. In kinesthetic demonstration, robots are guided by humans directly, without tackling correspondence problems owing to different kinematics and dynamics between each other [13].…”
Section: Introduction
“…To encode human hand movements with task-oriented models, movement primitives (MPs) [18] are well-established methods in robotics. Generally, movement primitive learning methods fall into two groups: one is based on probabilistic models [11,19,20], the other on dynamical systems [21,22]. Probabilistic models commonly take the form of a Hidden Markov Model with Gaussian Mixture Regression (HMM-GMR) [19] or a Gaussian Mixture Model with Gaussian Mixture Regression (GMM-GMR) [11].…”
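A minimal GMM-GMR sketch (using scikit-learn's `GaussianMixture`) shows how a mixture fitted over joint (time, position) data is conditioned on time to retrieve a reference trajectory. The synthetic sine demonstrations and the component count are illustrative assumptions, not the cited papers' setup.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic "demonstrations": five noisy 1-D hand trajectories over time.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
demos = np.vstack([
    np.column_stack([t, np.sin(2 * np.pi * t) + 0.02 * rng.standard_normal(100)])
    for _ in range(5)
])  # joint (time, position) samples

# Fit a GMM over the joint (t, x) space.
gmm = GaussianMixture(n_components=6, covariance_type="full",
                      random_state=0).fit(demos)

def gmr(gmm, t_query):
    """Gaussian mixture regression: condition the joint GMM on time
    to retrieve the expected trajectory x(t)."""
    mu, cov, w = gmm.means_, gmm.covariances_, gmm.weights_
    out = np.zeros_like(t_query)
    for i, tq in enumerate(t_query):
        # responsibility of each component for this time step
        h = w * np.exp(-0.5 * (tq - mu[:, 0])**2 / cov[:, 0, 0]) / np.sqrt(cov[:, 0, 0])
        h /= h.sum()
        # per-component conditional mean of x given t, then blend
        cond = mu[:, 1] + cov[:, 1, 0] / cov[:, 0, 0] * (tq - mu[:, 0])
        out[i] = h @ cond
    return out

x_ref = gmr(gmm, t)  # smooth reference trajectory from noisy demos
```

The regressed `x_ref` averages out per-demonstration noise, which is exactly why GMM-GMR is used to extract a single reference trajectory from multiple demonstrations.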
In manufacturing, traditional task pre-programming methods limit the efficiency of human–robot skill transfer. This paper proposes a novel task-learning strategy that enables robots to learn skills from human demonstrations flexibly and to generalize skills to new task situations. Specifically, we establish a markerless vision capture system to acquire continuous human hand movements and develop a threshold-based heuristic segmentation algorithm that segments the complete movements into movement primitives (MPs), which encode human hand movements with task-oriented models. For movement primitive learning, we adopt a Gaussian mixture model with Gaussian mixture regression (GMM-GMR) to extract an optimal trajectory encapsulating sufficient human features, and we utilize dynamical movement primitives (DMPs) for trajectory generalization. In addition, we propose an improved visuo-spatial skill learning (VSL) algorithm to learn goal configurations concerning spatial relationships between task-relevant objects. Only one multi-operation demonstration is required for learning, and robots can generalize goal configurations to new task situations following the task execution order from the demonstration. A series of peg-in-hole experiments demonstrates that the proposed task-learning strategy obtains exact pick-and-place points and generates smooth, human-like trajectories, verifying its effectiveness.
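A threshold-based heuristic segmentation could look like the following sketch, which splits a hand trajectory into movement primitives at low-velocity pauses. The specific criterion (a speed threshold on finite differences) and all parameter values are assumptions; the paper's exact heuristic is not given here.

```python
import numpy as np

def segment_by_velocity(traj, dt=0.01, v_thresh=0.05, min_len=5):
    """Heuristic segmentation (assumed variant): split a trajectory into
    movement primitives wherever speed stays below a threshold, i.e. at
    the pauses that typically separate reach, align, and insert motions."""
    speed = np.linalg.norm(np.diff(traj, axis=0), axis=1) / dt
    moving = speed > v_thresh
    segments, start = [], None
    for i, m in enumerate(moving):
        if m and start is None:
            start = i                      # a movement primitive begins
        elif not m and start is not None:
            if i - start >= min_len:       # ignore spurious blips
                segments.append((start, i))
            start = None
    if start is not None and len(moving) - start >= min_len:
        segments.append((start, len(moving)))
    return segments

# Two movements separated by a pause: move, hold, move back.
traj = np.concatenate([np.linspace(0, 1, 50),
                       np.full(30, 1.0),
                       np.linspace(1, 0, 50)])[:, None]
segs = segment_by_velocity(traj)
```

Real hand-tracking data would need smoothing before differencing; the hold phase in the toy trajectory plays the role of the pause between pick and place.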
“…However, it provides near-optimal solutions, not optimal ones. Calinon et al. [15], Ude et al. [16], and Kyrarini et al. [17] proposed methods for obtaining motor skills based on imitation learning. Their motor skills are modeled from human demonstration datasets.…”
We propose a framework based on imitation learning and self-learning that enables robots to learn, improve, and generalize motor skills. The peg-in-hole task is important in manufacturing assembly work. Two motor skills for the peg-in-hole task are targeted: "hole search" and "peg insertion". The robots learn initial motor skills from human demonstrations and then improve and/or generalize them through reinforcement learning (RL). An initial motor skill is represented as a concatenation of the parameters of a hidden Markov model (HMM) and a dynamic movement primitive (DMP), used to classify input signals and generate motion trajectories. Reactions are classified as familiar or unfamiliar (i.e., modeled or not modeled); initial motor skills are improved to solve familiar reactions and generalized to solve unfamiliar ones. The proposed framework includes processes, algorithms, and reward functions that can be used for various motor skill types. To evaluate the framework, the motor skills were performed using an actual robotic arm and two reward functions for RL. To verify the learning and improving/generalizing processes, we successfully applied the framework to different shapes of pegs and holes. Moreover, the execution time steps and path optimization of RL were evaluated experimentally.

[…] of motor skills. However, for acquiring complete motor skills, it has one evident limitation: it does not ensure that robots acquire motor skills optimized for their goals (that is, it generally provides near-optimal solutions) [5]. Furthermore, it is not easy for human performers to provide a demonstration dataset that covers all situations arising during the execution of a motor skill [6]. Nonetheless, these human demonstrations can be used as a solid starting point for robots to acquire motor skills [7]. To obtain optimal motor skills, robots must be able to improve motor skills through self-learning.
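The familiar/unfamiliar classification described above can be sketched with a discrete HMM scored by the forward algorithm: a reaction signal whose log-likelihood under the learned model is low would be treated as unfamiliar. The toy two-state model and symbol sequences below are illustrative assumptions, not the paper's learned parameters.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum())
        alpha /= alpha.sum()
    return ll

# Toy 2-state HMM modeling a "familiar" reaction signature
# (e.g. quantized force/moment symbols: 0 = low contact, 1 = high contact).
pi = np.array([0.9, 0.1])                 # initial state distribution
A = np.array([[0.9, 0.1], [0.2, 0.8]])    # state transitions
B = np.array([[0.8, 0.2], [0.1, 0.9]])    # emission probabilities

familiar = np.array([0, 0, 0, 1, 1, 1])    # matches the modeled pattern
unfamiliar = np.array([1, 0, 1, 0, 1, 0])  # alternating, poorly modeled
ll_f = forward_loglik(familiar, pi, A, B)
ll_u = forward_loglik(unfamiliar, pi, A, B)
```

Thresholding the log-likelihood then decides whether to refine the existing skill (familiar) or trigger generalization via RL (unfamiliar).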
However, this self-learning is a time-consuming and expensive process in the absence of references. We attempt to obtain these optimal solutions with less trial and error by providing near-optimal solutions learned from human demonstrations. In this paper, robots improve motor skills to optimize them (improvement) and generalize them so that they are widely applicable (generalization) through self-learning. The peg-in-hole task has also been addressed in several imitation-learning studies [8,9]. However, the peg-in-hole task is not easy to learn with this method alone. The main reason is that it is difficult for human performers to provide a complete demonstration dataset to robots, because it is not feasible to prepare all possible reaction situations. In addition, unintended reaction information may be included in the dataset during the demonstration process. These problems may prevent robots from acquiring complete motor skills. Thus, initial motor skills are learned to classify reaction force/moment signals and generate reaction motion trajectories from human demonstrations, and thei…
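A one-dimensional DMP sketch illustrates the trajectory-generation half of such a framework: a forcing term is fitted to a single demonstration, and the system is rolled out toward a new goal to generalize the motion. The gains, basis functions, and minimum-jerk demonstration are assumptions, not the paper's parameters.

```python
import numpy as np

# Demonstrated trajectory: smooth minimum-jerk reach from 0 to 1.
dt, T = 0.01, 1.0
t = np.arange(0, T, dt)
u = t / T
x_demo = 10 * u**3 - 15 * u**4 + 6 * u**5
v_demo = np.gradient(x_demo, dt)
a_demo = np.gradient(v_demo, dt)

alpha, beta, alpha_s = 25.0, 6.25, 4.0      # standard critically damped gains
x0, g = x_demo[0], x_demo[-1]
s = np.exp(-alpha_s * t)                    # canonical phase variable

# Forcing term that would exactly reproduce the demonstration.
f_target = a_demo - alpha * (beta * (g - x_demo) - v_demo)

# RBF features in phase space; fit weights by least squares.
c = np.exp(-alpha_s * np.linspace(0, T, 10))        # basis centres
h = 1.0 / np.diff(c, append=c[-1] * 0.5)**2         # basis widths
psi = np.exp(-h * (s[:, None] - c)**2)
phi = psi * s[:, None] / psi.sum(axis=1, keepdims=True) * (g - x0)
w, *_ = np.linalg.lstsq(phi, f_target, rcond=None)

def rollout(g_new, steps=150):
    """Integrate the DMP with a new goal: same motion shape, new endpoint."""
    x, v, sp, out = x0, 0.0, 1.0, []
    for _ in range(steps):
        p = np.exp(-h * (sp - c)**2)
        f = (p @ w) * sp / p.sum() * (g_new - x0)   # phase-gated forcing
        v += dt * (alpha * (beta * (g_new - x) - v) + f)
        x += dt * v
        sp += dt * (-alpha_s * sp)
        out.append(x)
    return np.array(out)

traj = rollout(g_new=1.5)  # generalize the demonstrated reach to a new goal
```

Because the forcing term vanishes with the phase variable, the rollout is guaranteed to converge to the new goal; RL-based improvement would then adjust the weights `w` rather than the raw trajectory.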