Abstract: One of the long-term challenges of programming by demonstration is achieving generality, i.e. automatically adapting the reproduced behavior to novel situations. A common approach to achieving generality is to learn parameterizable skills from multiple demonstrations for different situations. In this paper, we generalize recent approaches to learning parameterizable skills based on dynamical movement primitives (DMPs), such that task parameters are also passed as inputs to the function approximator of the DMP. This leads to a more general, flexible, and compact representation of parameterizable skills, as demonstrated by our empirical evaluation on the iCub and Meka humanoid robots.
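The core idea, passing a task parameter into the DMP's function approximator alongside the phase variable, can be sketched as follows. This is a minimal illustration assuming a radial-basis-function approximator with Gaussian features over the joint (phase, task-parameter) space; all names, basis counts, and widths here are invented for the example, not taken from the paper.

```python
import numpy as np

def forcing_term(x, q, weights, centers_x, centers_q, width=20.0):
    """Hypothetical parameterized DMP forcing term: RBF features over
    both the phase variable x and a scalar task parameter q, so a single
    weight vector covers a family of task variations."""
    # Gaussian activations in the joint (phase, task-parameter) space
    psi = np.exp(-width * ((x - centers_x) ** 2 + (q - centers_q) ** 2))
    # Standard DMP-style normalized weighted sum, gated by the phase x
    return np.dot(weights, psi) * x / (psi.sum() + 1e-10)

# Toy usage: 10 basis functions with random weights
rng = np.random.default_rng(0)
cx = np.linspace(0.0, 1.0, 10)   # phase centers
cq = np.linspace(0.0, 1.0, 10)   # task-parameter centers
w = rng.normal(size=10)
f = forcing_term(0.3, 0.5, w, cx, cq)
```

Evaluating the same learned weights at a different task parameter q yields a different forcing term, which is what lets one compact representation generalize across situations.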
Summary: In recent years the Robot Operating System (Quigley et al. 2009) (ROS) has become the 'de facto' standard framework for robotics software development. The ros_control framework provides the capability to implement and manage robot controllers with a focus on both real-time performance and sharing of controllers in a robot-agnostic way. The primary motivation for a separate robot-control framework is the lack of a real-time-safe communication layer in ROS. Furthermore, the framework implements solutions for controller-lifecycle and hardware resource management as well as abstractions on hardware interfaces with minimal assumptions on hardware or operating system. The clear, modular design of ros_control makes it ideal for both research and industrial use and has indeed seen many such applications to date. The idea of ros_control originates from the pr2_controller_manager framework specific to the PR2 robot, but ros_control is fully robot-agnostic. Controllers expose standard ROS interfaces for out-of-the-box 3rd-party solutions to robotics problems like manipulation path planning (MoveIt! (Chitta, Sucan, and Cousins 2012)) and autonomous navigation (the ROS navigation stack). Hence, for a robot made up of a mobile base and an arm that supports ros_control, no additional code needs to be written; only a few controller configuration files are required before it is ready to navigate autonomously and do path planning for the arm. ros_control also provides several libraries to support writing custom controllers.
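The "few controller configuration files" mentioned above are typically YAML documents loaded onto the ROS parameter server. A minimal sketch for a base-plus-arm robot might look like the following; the controller types are standard ones shipped with the ros_controllers packages, while the joint names are invented for the example and would have to match the robot's own hardware interface.

```yaml
# Hypothetical ros_control configuration (joint names are placeholders)
joint_state_controller:
  type: joint_state_controller/JointStateController
  publish_rate: 50

arm_controller:
  type: position_controllers/JointTrajectoryController
  joints: [shoulder_pan_joint, shoulder_lift_joint, elbow_joint]

base_controller:
  type: diff_drive_controller/DiffDriveController
  left_wheel: left_wheel_joint
  right_wheel: right_wheel_joint
```

With such a file loaded, the controller manager can spawn the listed controllers, and MoveIt! or the navigation stack can talk to them through their standard ROS interfaces.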
In contexts where robots share their workspace with humans, safety is of utmost importance. Consequently, in recent years, considerable effort has been devoted to the design of human-friendly robots, addressing both mechanical and control design aspects. Regarding controller design, this often involves introducing compliance and ensuring asymptotic stability using an interaction control scheme and passivity theory. Moreover, when human operators physically interact with the robot during work, strict safety measures become necessary, including power and force limitations. In this letter, a novel impedance control technique for collaborative robots is presented. The featured controller allows safe human-robot interaction through energy and power limitations, assuring passivity through energy tanks. The proposed controller is evaluated with a KUKA LWR 4+ arm in a comanipulation environment.
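The energy-tank mechanism combining power and energy limitations can be illustrated with a small discrete-time sketch. This is a generic textbook-style energy-tank update, not the paper's specific controller; the variable names, limits, and the scaling rule are all assumptions made for the example.

```python
def tank_step(E, P_ctrl, dt, E_min=0.1, E_max=5.0, P_max=2.0):
    """One update of a hypothetical energy tank.

    E      -- current tank energy [J]
    P_ctrl -- power the impedance controller wants to inject into the
              human/environment (negative = dissipation, which refills
              the tank)
    Returns the new tank energy and the scaling factor beta in [0, 1]
    applied to the control action.
    """
    # Power limitation: clip the requested power injection
    P = min(P_ctrl, P_max)
    # Energy limitation: only inject what the tank can pay for; scale
    # the action down as the tank approaches its lower bound, which is
    # what preserves passivity of the overall interaction
    if P > 0 and E - P * dt < E_min:
        beta = max(0.0, (E - E_min) / (P * dt))
    else:
        beta = 1.0
    E_next = min(E - beta * P * dt, E_max)
    return E_next, beta
```

A full controller would multiply the non-passive part of the impedance command by beta at every control cycle, so the robot remains passive even when the desired behavior would otherwise inject energy.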
In co-manipulation, humans and robots solve manipulation tasks together. Virtual guides are important tools for co-manipulation, as they constrain the movement of the robot to avoid undesirable effects, such as collisions with the environment. Defining virtual guides is often a laborious task requiring expert knowledge. This restricts the usefulness of virtual guides in environments where new tasks may need to be solved, or where multiple tasks need to be solved sequentially, but in an unknown order. To address this, we propose a framework for multiple probabilistic virtual guides, and demonstrate a concrete implementation of such guides using kinesthetic teaching and Gaussian mixture models. Our approach enables non-expert users to design virtual guides through demonstration. Also, they may demonstrate novel guides, even if already known guides are active. Finally, users are able to intuitively select the appropriate guide from a set of guides through physical interaction with the robot. We evaluate our approach in a pick-and-place task, where users place objects at one of several positions in a cupboard.
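Selecting the appropriate guide through physical interaction can be sketched as a likelihood comparison: each guide is a Gaussian mixture model over positions, and the guide whose GMM best explains the user's recent motion wins. This is a minimal generic sketch of GMM-based guide selection, not the paper's exact formulation; the function names and the use of a plain argmax over total log-likelihood are assumptions for the example.

```python
import numpy as np

def gauss_logpdf(x, mean, cov):
    """Log-density of a multivariate Gaussian at point x."""
    d = x - mean
    k = len(mean)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (k * np.log(2 * np.pi) + logdet + d @ np.linalg.solve(cov, d))

def gmm_loglik(points, weights, means, covs):
    """Total log-likelihood of observed points under one guide's GMM."""
    total = 0.0
    for x in points:
        comp = [np.log(w) + gauss_logpdf(x, m, c)
                for w, m, c in zip(weights, means, covs)]
        total += np.logaddexp.reduce(comp)  # log-sum-exp over components
    return total

def select_guide(points, guides):
    """Index of the guide whose GMM best explains the observed motion."""
    return int(np.argmax([gmm_loglik(points, *g) for g in guides]))
```

In use, the robot would evaluate the last few end-effector positions against every guide in the set and stiffen the virtual guide with the highest likelihood, so the user selects a guide simply by moving toward it.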
Virtual guiding fixtures constrain the movements of a robot to task-relevant trajectories, and have been successfully applied to, for instance, surgical and manufacturing tasks. Whereas previous work has considered guiding fixtures for single tasks, in this paper we propose a library of guiding fixtures for multiple tasks, with methods for 1) creating and adding guides based on machine learning; 2) selecting guides on-line based on a probabilistic implementation of guiding fixtures; 3) refining existing guides based on an incremental learning method. We demonstrate in an industrial task that a library of guiding fixtures provides an intuitive haptic interface for joint human-robot completion of tasks, and improves performance in terms of task execution time, mental workload and errors.
In human-robot comanipulation, virtual guides are an important tool used to assist the human worker by reducing physical effort and cognitive overload during task execution. However, constructing virtual guides often requires expert knowledge and modeling of the task, which restricts the usefulness of virtual guides to scenarios with unchanging constraints. To overcome these challenges and enhance the flexibility of virtual guide programming, we present a novel approach that allows the worker to create virtual guides by demonstration through an iterative method based on kinesthetic teaching and Akima splines. Thanks to this approach, the worker is able to locally modify the guides while being assisted by them, increasing the intuitiveness and naturalness of the process. Finally, we evaluate our approach in a simulated sanding task with a collaborative robot.
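Building a guide from demonstrated waypoints with an Akima spline can be sketched with SciPy's `Akima1DInterpolator`, which interpolates each coordinate against a progress variable. The waypoints and the choice of a normalized progress parameter below are invented for the example; the paper's actual pipeline (and how it iteratively refines the guide) may differ.

```python
import numpy as np
from scipy.interpolate import Akima1DInterpolator

# Hypothetical demonstrated waypoints: progress s in [0, 1] -> xy position,
# e.g. sampled from a kinesthetic demonstration
s = np.linspace(0.0, 1.0, 6)
waypoints = np.array([[0.0, 0.0], [0.1, 0.3], [0.25, 0.5],
                      [0.5, 0.55], [0.8, 0.4], [1.0, 0.0]])

# Akima interpolation passes through every waypoint while avoiding the
# overshoot cubic splines can exhibit near abrupt demonstrated motions
guide = Akima1DInterpolator(s, waypoints)   # interpolates along axis 0

# Dense guide path the robot's compliance is anchored to
path = guide(np.linspace(0.0, 1.0, 100))
```

Local modification then amounts to replacing a few waypoints in the affected region and refitting the spline, which is what makes in-place, assisted editing of the guide tractable.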