Dynamic movement primitives (DMPs) were proposed as an efficient way to learn and control complex robot behaviors. They can represent point-to-point and periodic movements and can be applied in Cartesian or in joint space. One problem that arises when DMPs are used to define control policies in Cartesian space is that no minimal, singularity-free representation of orientation exists. In this paper we show how dynamic movement primitives can be defined for non-minimal, singularity-free representations of orientation, such as rotation matrices and quaternions. All of the advantages of DMPs, including ease of learning, the ability to include coupling terms, and scale and temporal invariance, carry over to our formulation. We also propose a new phase-stopping mechanism to ensure full movement reproduction in the case of perturbations.
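The core ingredient of a quaternion-based DMP is the orientation error computed through the quaternion logarithmic map, which replaces the simple goal-minus-state difference used in positional DMPs. The following is a minimal sketch of that building block (function names and the (w, x, y, z) convention are our own choices, not taken from the paper):

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_conj(q):
    """Conjugate (inverse for unit quaternions)."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def quat_log(q):
    """Logarithmic map: unit quaternion -> 3D rotation vector."""
    v = q[1:]
    n = np.linalg.norm(v)
    if n < 1e-12:
        return np.zeros(3)
    return np.arccos(np.clip(q[0], -1.0, 1.0)) * v / n

def orientation_error(goal_q, q):
    """2 * log(g * conj(q)): the angular displacement that can drive
    the transformation system of an orientation DMP."""
    return 2.0 * quat_log(quat_mul(goal_q, quat_conj(q)))
```

For a goal rotated 90 degrees about the z-axis relative to the current orientation, this error is a rotation vector of magnitude pi/2 about z, which the DMP's second-order dynamics then drive to zero.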
The framework of dynamic movement primitives contains many favorable properties for the execution of robotic trajectories, such as indirect dependency on time, response to perturbations, and the ability to easily modulate the given trajectories, but in its original form the framework remains constrained to the kinematic aspect of the movement. In this paper we bridge the gap to dynamic behavior by extending the framework with force/torque feedback. We propose and evaluate a modulation approach that allows interaction with objects and the environment. Through the proposed coupling of originally independent robotic trajectories, the approach also enables the execution of bimanual and tightly coupled cooperative tasks. We apply an iterative learning control algorithm to learn a coupling term, which is applied to the original trajectory in a feedforward fashion and thus modifies the trajectory in accordance with the desired positions or external forces. A stability analysis and results of simulated and real-world experiments using two KUKA LWR arms for bimanual tasks and interaction with the environment are presented. By building on the framework of dynamic movement primitives, we retain all of its favorable properties, which is demonstrated with temporal modulation and in a two-agent obstacle avoidance task.
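The iterative learning idea can be sketched in a few lines: after each execution, the feedforward coupling term is updated proportionally to the force-tracking error of that execution, with a low-pass step standing in for the filtering a real controller would need. Gains and the smoothing scheme below are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

def ilc_update(coupling, force_error, learn_gain=0.5, smooth=0.9):
    """One ILC pass: update the feedforward coupling term from the
    force-tracking error of the previous execution, then low-pass it.
    learn_gain and smooth are hypothetical tuning parameters."""
    updated = coupling + learn_gain * force_error
    # first-order low-pass as a simple stand-in for a Q-filter
    out = np.empty_like(updated, dtype=float)
    out[0] = updated[0]
    for i in range(1, len(updated)):
        out[i] = smooth * out[i - 1] + (1 - smooth) * updated[i]
    return out
```

Repeated over executions against a constant force reference, the coupling term converges toward the value that cancels the error, which is the feedforward behavior the abstract describes.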
This investigation was designed to (a) develop an individualized mechanical model for measuring aerodynamic drag (F_d) while ski racing through multiple gates, (b) estimate the energy dissipation (E_d) caused by F_d and compare it to the total energy loss (E_t), and (c) investigate the relative contribution E_d/E_t to performance during giant slalom (GS) skiing. Nine elite skiers were monitored in different positions and at different wind velocities in a wind tunnel, as well as during GS and straight downhill skiing, employing a Global Navigation Satellite System. On the basis of the wind tunnel measurements, a linear regression model of the drag coefficient multiplied by cross-sectional area as a function of shoulder height was established for each skier (r > 0.94, all P < 0.001). Skiing velocity, F_d, E_t, and E_d per GS turn were 15-21 m/s, 20-60 N, -11 to -5 kJ, and -2.3 to -0.5 kJ, respectively. E_d/E_t ranged from ~5% to 28%, and the relationship between E_t/v_in and E_d was r = -0.12 (all NS). In conclusion, (a) F_d during alpine skiing was calculated by mechanical modeling, (b) E_d made a relatively small contribution to E_t, and (c) higher relative E_d was correlated with better performance in elite GS skiers, suggesting that reducing ski-snow friction can improve this performance.
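The mechanical model described here reduces to the standard drag equation with a skier-specific linear fit for the drag area. A minimal sketch under assumed values (air density, the fit coefficients a and b, and all numbers below are illustrative, not the study's data):

```python
import numpy as np

RHO_AIR = 1.2  # kg/m^3, assumed air density

def drag_force(v, shoulder_height, a, b):
    """F_d = 0.5 * rho * CdA(h) * v^2, with the individualized linear
    wind-tunnel model CdA(h) = a*h + b (a, b hypothetical per skier)."""
    cda = a * shoulder_height + b
    return 0.5 * RHO_AIR * cda * v**2

def drag_energy(v, h, a, b, dt):
    """Energy dissipated by drag over a turn: E_d = -sum(F_d * v) * dt."""
    v = np.asarray(v, float)
    h = np.asarray(h, float)
    return -np.sum(drag_force(v, h, a, b) * v) * dt
```

With CdA = 0.3 m^2 at 15 m/s this gives F_d = 40.5 N and, over a 2 s turn, E_d of about -1.2 kJ, consistent in magnitude with the ranges reported above.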
The paper deals with kinematic control algorithms for on-line obstacle avoidance that allow a kinematically redundant manipulator to move in an unstructured environment without colliding with obstacles. The presented approach is based on redundancy resolution at the velocity level. The primary task is determined by the end-effector trajectories, and the internal motion of the manipulator is used for obstacle avoidance. The obstacle-avoiding motion is defined in a one-dimensional operational space; hence, the system has fewer singularities, which makes the implementation easier. Instead of the exact pseudoinverse solution we propose an approximate one, which is computationally more efficient and also allows many simultaneously active obstacles to be considered without any problems. The fast cycle times of the numerical implementation enable the use of the algorithm in real-time control. For illustration, simulation results of a highly redundant planar manipulator moving in an unstructured and time-varying environment and experimental results of a four-link planar manipulator are given.
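The velocity-level scheme can be sketched as the primary end-effector task solved with the pseudoinverse, plus a secondary obstacle-avoidance velocity projected into the null space. In the spirit of the approximate solution the abstract mentions, the secondary term below uses the transpose of the one-dimensional obstacle Jacobian instead of its exact pseudoinverse; the gain k and all names are illustrative:

```python
import numpy as np

def redundancy_resolution(J, x_dot, J_obs, d_dot, k=1.0):
    """Velocity-level redundancy resolution with null-space obstacle
    avoidance. J: task Jacobian, x_dot: desired end-effector velocity,
    J_obs (1 x n): maps joint velocities to the critical point's
    1-D velocity toward the obstacle, d_dot: desired escape velocity."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(J.shape[1]) - J_pinv @ J           # null-space projector
    q_dot_primary = J_pinv @ x_dot                # exact primary task
    q_dot_secondary = k * J_obs.T * d_dot         # approximate, cheap
    return q_dot_primary + N @ q_dot_secondary.flatten()
```

Because the secondary velocity is projected through N, the end-effector trajectory is unaffected, and summing one such term per obstacle handles many simultaneously active obstacles.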
In this paper we propose a novel approach for intuitive and natural physical human-robot interaction in cooperative tasks. Through initial learning by demonstration, the robot's behavior naturally evolves into a cooperative task in which the human co-worker is allowed to modify both the spatial course of motion and the speed of execution at any stage. The main feature of the proposed adaptation scheme is that the robot adjusts its stiffness in a path operational space defined by a Frenet-Serret frame. Furthermore, the required dynamic capabilities of the robot are obtained by decoupling the robot dynamics in the operational space attached to the desired trajectory. Speed-scaled dynamic movement primitives are applied for the underlying task representation. The combination allows a human co-worker in a cooperative task to be less precise in parts of the task that require high precision, as the precision aspect is learned and provided by the robot. The user can also freely change the speed and/or the trajectory by simply applying force to the robot. The proposed scheme was experimentally validated on three illustrative tasks. The first task demonstrates novel two-stage learning by demonstration, where the spatial part of the trajectory is demonstrated independently from the velocity part. The second task shows how parts of the trajectory can be rapidly and significantly changed within one execution. The final experiment shows two KUKA LWR-4 robots in a bimanual setting cooperating with a human while carrying an object.
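Setting stiffness per axis in a path frame requires computing the Frenet-Serret frame (tangent, normal, binormal) along the desired trajectory. A minimal sketch, assuming the trajectory's velocity and acceleration are available and non-degenerate (non-zero speed, non-zero curvature):

```python
import numpy as np

def frenet_frame(vel, acc):
    """Frenet-Serret frame from velocity and acceleration at one
    trajectory point. Returns a rotation matrix whose columns are
    tangent, normal, and binormal; stiffness can then be set per
    axis in this frame (e.g. stiff normal to the path, compliant
    along it)."""
    t = vel / np.linalg.norm(vel)
    n_raw = acc - np.dot(acc, t) * t   # acceleration component normal to path
    n = n_raw / np.linalg.norm(n_raw)
    b = np.cross(t, n)
    return np.column_stack([t, n, b])  # path frame -> world rotation
```

A diagonal stiffness matrix K_path expressed in this frame becomes R @ K_path @ R.T in world coordinates, which is one plausible way to realize the path-dependent compliance the abstract describes.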
A typical robot assembly operation involves contacts between the parts of the product being assembled and consequently requires knowledge not only of position and orientation trajectories but also of the accompanying force-torque profiles for successful performance. To learn the execution of assembly operations even when the geometry of the product varies across task executions, the robot needs to be able to adapt its motion based on a parametric description of the current task condition, which is usually provided by geometric properties of the parts involved in the assembly. In our previous work we showed how positional control policies can be generalized to different task conditions. In this paper we propose a complete methodology to also generalize the orientation trajectories and the accompanying force-torque profiles, and to compute the necessary control policy for a given condition of the assembly task. Our method is based on statistical generalization of successful executions recorded at different task conditions, which are acquired by kinesthetic guiding. The parameters that describe the varying task conditions define queries into the recorded training data. To improve the execution of the skill after generalization, we combine the proposed approach with an adaptation method, thus enabling refinement of the generalized assembly operation.
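One common way to realize such query-based statistical generalization is a kernel-weighted average over the recorded demonstrations: trajectory parameters (e.g. DMP weights or sampled force-torque profiles) from demonstrations whose task conditions lie near the query receive higher weight. The Gaussian kernel, the bandwidth sigma, and all names below are illustrative assumptions, not necessarily the paper's exact method:

```python
import numpy as np

def generalize(queries, params_db, query, sigma=0.3):
    """Gaussian-kernel weighted average of recorded parameters.
    queries:   (n_demos, q_dim) task conditions of the demonstrations
    params_db: (n_demos, n_params) recorded trajectory/force parameters
    query:     (q_dim,) task condition of the new situation"""
    Q = np.asarray(queries, float)
    P = np.asarray(params_db, float)
    d2 = np.sum((Q - np.asarray(query, float))**2, axis=1)
    k = np.exp(-d2 / (2.0 * sigma**2))
    k /= k.sum()                       # normalize weights
    return k @ P                       # blended parameter vector
```

Querying halfway between two demonstrated conditions yields the average of their parameters, while a query at a demonstrated condition essentially reproduces that demonstration; the adaptation method mentioned above then refines this generalized policy online.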