Industrial robot manipulators play a significant role in modern manufacturing. Though peg-in-hole assembly is a common industrial task that has been extensively researched, safely solving complex, high-precision assembly in an unstructured environment remains an open problem. Reinforcement-learning (RL) methods have proven successful in autonomously solving manipulation tasks. However, RL is still not widely adopted in real robotic systems because working with real hardware entails additional challenges, especially when using position-controlled manipulators. The main contribution of this work is a learning-based method to solve peg-in-hole tasks with hole-position uncertainty. We propose the use of an off-policy, model-free reinforcement-learning method, and we bootstrap the training speed with several transfer-learning (sim2real) techniques and domain randomization. Our proposed learning framework for position-controlled robots was extensively evaluated on contact-rich insertion tasks in a variety of environments.
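The domain-randomization idea mentioned above can be sketched as follows. This is an illustrative toy, not the authors' implementation: the `randomize_episode` helper, the parameter names, and the noise ranges are all assumptions chosen only to show how per-episode perturbation of the hole position (and other simulator parameters) yields robustness to hole-position uncertainty.

```python
import random

def randomize_episode(nominal_hole_xy=(0.50, 0.10), pos_noise_mm=2.0):
    """Sample one randomized training-episode configuration (sim2real).

    Each RL episode perturbs the hole position and, illustratively, a few
    other simulator parameters, so the learned insertion policy does not
    overfit to one exact hole pose and transfers better to the real robot.
    """
    dx = random.uniform(-pos_noise_mm, pos_noise_mm) / 1000.0  # metres
    dy = random.uniform(-pos_noise_mm, pos_noise_mm) / 1000.0
    return {
        "hole_xy": (nominal_hole_xy[0] + dx, nominal_hole_xy[1] + dy),
        "friction": random.uniform(0.4, 1.0),
        "force_noise_std": random.uniform(0.0, 0.5),  # newtons
    }
```

In practice such a sampler would be called once per episode reset, so the policy only ever sees the nominal hole pose plus unknown noise, mirroring the uncertainty it will face on hardware.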
Complex contact-rich insertion is a ubiquitous robotic manipulation skill and usually involves nonlinear, low-clearance insertion trajectories as well as varying force requirements. A hybrid trajectory and force learning framework can be used to generate high-quality trajectories by imitation learning and to find suitable force control policies efficiently by reinforcement learning. However, with this approach, many human demonstrations are needed to learn several tasks, even when those tasks require topologically similar trajectories. Therefore, to reduce repetitive human teaching effort for new tasks, we present an adaptive imitation framework for robot manipulation. The main contribution of this work is a framework that introduces dynamic movement primitives into a hybrid trajectory and force learning framework to learn a specific class of complex contact-rich insertion tasks from the trajectory profile of a single task instance belonging to that class. Through experimental evaluations, we validate that the proposed framework is more sample-efficient, safer, and generalizes better when learning complex contact-rich insertion tasks, both in simulation and on real hardware.
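To make the role of dynamic movement primitives concrete, the following minimal 1-D sketch rolls out the standard discrete DMP transformation system. It is a generic textbook formulation under assumed gains, not the paper's implementation: the point attractor guarantees convergence to the goal, while a learned, phase-gated forcing term (here passed in as a function) shapes the trajectory, which is what lets one demonstrated profile generalize across instances of the same task class.

```python
def dmp_rollout(y0, goal, forcing, tau=1.0, alpha=25.0, beta=6.25,
                dt=0.001, steps=1000):
    """Roll out a 1-D discrete dynamic movement primitive (DMP).

    tau * y'' = alpha * (beta * (goal - y) - y') + f(x), where f is the
    learned forcing term, gated by the canonical phase x (which decays
    from 1 to 0) and scaled by the start-goal distance so the same shape
    transfers to new start/goal pairs.
    """
    y, dy = y0, 0.0
    x = 1.0          # canonical phase variable
    ax = 1.0         # canonical-system decay gain (assumed value)
    traj = []
    for _ in range(steps):
        f = forcing(x) * x * (goal - y0)       # phase-gated, goal-scaled
        ddy = (alpha * (beta * (goal - y) - dy) + f) / tau
        dy += ddy * dt
        y += dy * dt
        x += (-ax * x / tau) * dt
        traj.append(y)
    return traj
```

With a zero forcing term the rollout reduces to a critically damped spring that settles at the goal; the imitation-learning step would fit `forcing` to reproduce the single demonstrated trajectory.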
Complex assembly tasks involve nonlinear, low-clearance insertion trajectories with varying contact forces at different stages. For a robot to solve these tasks, it requires a precise and adaptive controller, which conventional force control methods cannot provide. Imitation learning is a promising method for learning controllers that can reproduce the nonlinear trajectories from human demonstrations without explicitly programming them into the robot. However, the force profiles obtained from human demonstration via tele-operation tend to be sub-optimal for complex assembly tasks, so it is undesirable to imitate them. Reinforcement learning can learn adaptive control policies through interaction with the environment, but it struggles with low sample efficiency and causes equipment wear and tear in the physical world. To address these problems, we present a combined learning-based framework to solve complex robotic assembly tasks from human demonstrations via hybrid trajectory learning and force learning. The main contribution of this work is a framework that combines imitation learning, to learn the nominal motion trajectory, with a reinforcement learning-based force control scheme to learn an optimal force control policy, which can satisfy the nominal trajectory while adapting to the force requirement of the assembly task. To further improve the imitation learning part, we develop a hierarchical architecture, following the idea of goal-conditioned imitation learning, to generate the trajectory learning policy at the skill level offline. Through experimental validations, we corroborate that the proposed learning-based framework can generate high-quality trajectories and find suitable force control policies that adapt to the tasks' force requirements more efficiently.
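One simple way to picture how a nominal imitation-learned trajectory and a learned force policy can be combined is a compliance (admittance-style) correction, sketched below. This is an illustrative stand-in, not the paper's control law: the `admittance_step` helper, the per-axis stiffness values, and the pure-P correction are assumptions; in the full framework the RL policy would supply the desired contact force at each step.

```python
def admittance_step(x_nominal, f_measured, f_desired, stiffness):
    """One step of a hybrid trajectory/force scheme (toy model).

    The commanded pose is the imitation-learned nominal pose plus a
    compliance offset that pushes the measured contact force toward the
    desired one: x_cmd = x_nom + (f_des - f_meas) / k, per axis.
    """
    return [
        x + (fd - fm) / k
        for x, fm, fd, k in zip(x_nominal, f_measured, f_desired, stiffness)
    ]
```

When measured and desired forces agree, the robot simply tracks the nominal trajectory; any force error deflects the commanded pose, which is how the force policy adapts the motion during contact.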
Factory automation robot systems often depend on specially made jigs that precisely position each part, which increases the system's cost and limits flexibility. We propose a method to determine the 3D pose of an object with high precision and confidence, using only parallel robotic grippers and no part-specific jigs. Our method automatically generates a sequence of actions that ensures that the real-world position of the physical object matches the system's assumed pose to sub-mm precision. Furthermore, we propose the use of "extrinsic" actions, which exploit gravity, the environment, and the gripper geometry to significantly reduce or even eliminate the uncertainty about an object's pose. We show in simulated and real-robot experiments that our method outperforms our previous work, with success rates over 95%. The source code was made public at github.com/omron-sinicx.
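The effect of an "extrinsic" action can be illustrated with a toy uncertainty model. The interval representation and the `push_against_surface` helper below are hypothetical, chosen only to show the key idea: pushing a grasped part flush against a known fixed surface collapses its positional uncertainty along the push axis, without any part-specific jig.

```python
def push_against_surface(pose_interval, axis, surface_pos):
    """Toy model of an extrinsic action (illustrative, not the paper's code).

    `pose_interval` maps each axis name to a (lo, hi) uncertainty interval.
    After pushing the part flush against a fixed surface at `surface_pos`,
    its position along `axis` is known exactly; other axes are unchanged.
    """
    new_interval = dict(pose_interval)
    new_interval[axis] = (surface_pos, surface_pos)  # uncertainty eliminated
    return new_interval
```

A pose-certain plan would chain such actions (pushes, regrasps, gravity settling) until every axis interval is tight enough for sub-mm assembly.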
Robotic assembly tasks involve complex, low-clearance insertion trajectories with varying contact forces at different stages. While the nominal motion trajectory can be easily obtained from human demonstrations through kinesthetic teaching, teleoperation, or simulation, among other methods, the force profile is harder to obtain, especially when a real robot is unavailable. It is difficult to obtain a realistic force profile in simulation even with physics engines, and such simulated force profiles tend to be unsuitable for actual robotic assembly due to the reality gap and uncertainty in the assembly process. To address this problem, we present a combined learning-based framework to imitate human assembly skills through hybrid trajectory learning and force learning. The main contribution of this work is a framework that combines hierarchical imitation learning, to learn the nominal motion trajectory, with a reinforcement learning-based force control scheme to learn an optimal force control policy that can satisfy the nominal trajectory while adapting to the force requirement of the assembly task. To further improve the imitation learning part, we develop a hierarchical architecture, following the idea of goal-conditioned imitation learning, to generate the trajectory learning policy at the skill level offline. Through experimental validations, we corroborate that the proposed learning-based framework is robust to uncertainty in the assembly task, can generate high-quality trajectories, and can find suitable force control policies that adapt to the task's force requirements more efficiently.
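A common data-side ingredient of goal-conditioned imitation learning is hindsight goal relabeling, sketched below. This is a generic illustration under assumed data shapes, not the authors' pipeline: any state reached later in a demonstration can be treated as the goal for the preceding (state, action) pairs, turning one demonstration into many (state, goal) → action examples for training a skill-level policy offline.

```python
def relabel_goals(trajectory):
    """Hindsight goal relabeling for goal-conditioned imitation learning.

    `trajectory` is a list of (state, action) pairs from one demonstration.
    For every timestep t, every later state serves as an alternative goal,
    producing ((state, goal), action) supervised training examples.
    """
    data = []
    for t, (state, action) in enumerate(trajectory):
        for g in range(t + 1, len(trajectory)):
            goal = trajectory[g][0]
            data.append(((state, goal), action))
    return data
```

A behavior-cloning policy trained on this relabeled set learns to reach commanded goals, which is what allows the skill-level trajectory policy to be composed for new task instances.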