Abstract: Robots are becoming safe and smart enough to work alongside people not only on manufacturing production lines, but also in spaces such as homes, museums, or hospitals. This capability is especially valuable in situations where a human needs another person's help to perform a task, since a robot can take on the role of the helper. In this sense, a human and a robotic assistant may cooperatively carry out a variety of tasks, which requires the robot to communicate with the person, understand his/her needs, and behave accordingly. To achieve this, we propose a framework that lets a user teach a robot collaborative skills from demonstrations. We mainly focus on tasks involving physical contact with the user, where not only position, but also force sensing and compliance become highly relevant. Specifically, we present an approach that combines probabilistic learning, dynamical systems, and stiffness estimation to encode the robot's behavior along the task. Our method allows a robot to learn not only trajectory-following skills, but also impedance behaviors. To show the functionality and flexibility of our approach, two different testbeds are used: a transportation task and a collaborative table assembly.
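As a rough illustration of the kind of pipeline this abstract describes, the sketch below (an assumption on our part, not the authors' code) fits a Gaussian mixture over time-indexed demonstrations, retrieves a reference trajectory by Gaussian Mixture Regression, and derives a stiffness matrix from the conditional covariance, so the robot tracks stiffly where demonstrations agree and stays compliant where they vary. All function names and gain limits are illustrative.

```python
# Minimal sketch (not the authors' code): GMR over time-indexed demonstrations,
# with stiffness set inversely proportional to the conditional variance.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm(demos, n_components=5):
    """demos: list of (T, 1+D) arrays, columns = [time, position...]."""
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type='full', random_state=0)
    gmm.fit(np.vstack(demos))
    return gmm

def _gauss(t, m, v):
    return np.exp(-0.5 * (t - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

def gmr(gmm, t):
    """Condition the joint GMM p(t, x) on time t; return mean and covariance of x."""
    D = gmm.means_.shape[1] - 1
    mu, cov, w = gmm.means_, gmm.covariances_, gmm.weights_
    # Responsibility of each component for the input t
    h = np.array([w[k] * _gauss(t, mu[k, 0], cov[k, 0, 0]) for k in range(len(w))])
    h /= h.sum()
    x_hat, sigma = np.zeros(D), np.zeros((D, D))
    for k in range(len(w)):
        # Conditional mean/covariance of x given t for component k
        mk = mu[k, 1:] + cov[k, 1:, 0] / cov[k, 0, 0] * (t - mu[k, 0])
        ck = cov[k, 1:, 1:] - np.outer(cov[k, 1:, 0], cov[k, 0, 1:]) / cov[k, 0, 0]
        x_hat += h[k] * mk
        sigma += h[k] * (ck + np.outer(mk, mk))
    sigma -= np.outer(x_hat, x_hat)   # law of total variance
    return x_hat, sigma

def stiffness_from_covariance(sigma, k_min=50.0, k_max=800.0):
    """Heuristic: stiff along low-variance directions, compliant elsewhere."""
    vals, vecs = np.linalg.eigh(sigma)
    gains = np.clip(k_min + (k_max - k_min) * (vals.min() / np.maximum(vals, 1e-9)),
                    k_min, k_max)
    return vecs @ np.diag(gains) @ vecs.T
```

The stiffness heuristic encodes a common variable-impedance design choice: directions in which the demonstrations are consistent are assumed to matter and are tracked stiffly, while high-variance directions are left compliant for safe physical interaction.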
This paper proposes an end-to-end learning-from-demonstration framework for teaching force-based manipulation tasks to robots. The strengths of this work are manifold. First, we deal with the problem of learning through force perceptions exclusively. Second, we propose to exploit haptic feedback both as a means of improving teacher demonstrations and as a human–robot interaction tool, establishing a bidirectional communication channel between the teacher and the robot, in contrast to works using kinesthetic teaching. Third, we address the well-known "what to imitate?" problem from a different point of view, based on the mutual information between perceptions and actions. Lastly, the teacher's demonstrations are encoded using a Hidden Markov Model, and the robot execution phase is implemented with a modified version of Gaussian Mixture Regression that uses implicit temporal information from the probabilistic model, which is needed when tackling tasks with ambiguous perceptions. Experimental results show that the robot is able to learn and reproduce two different manipulation tasks, with a performance comparable to the teacher's.
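The mutual-information view of "what to imitate?" can be sketched as follows: rank each perception channel by how informative it is about the demonstrated actions, and keep only the informative ones as regression inputs. This minimal illustration is our own, not the paper's implementation; it relies on scikit-learn's mutual_info_regression, and the threshold is an arbitrary placeholder.

```python
# Minimal sketch (assumption, not the paper's implementation): select the
# perception channels that carry information about the demonstrated actions.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def select_perception_channels(perceptions, actions, threshold=0.1):
    """perceptions: (N, P) force-torque samples; actions: (N, A) commands.
    Returns indices of channels informative about at least one action dim."""
    mi = np.zeros(perceptions.shape[1])
    for a in range(actions.shape[1]):
        # MI of every perception channel with action dimension a; keep the max
        mi = np.maximum(mi, mutual_info_regression(perceptions, actions[:, a]))
    return np.where(mi > threshold)[0]
```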
A systematic overview of model-based manipulation planning for deformable objects is presented. Existing modelling techniques for volumetric, planar, and linear deformable objects are described, emphasizing the different types of deformation. Planning strategies are categorized according to the type of manipulation goal: path planning, folding/unfolding, topology modifications, and assembly. Most current contributions fit naturally into these categories, and the presented algorithms thus constitute an adequate basis for future developments.
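As a concrete taste of the modelling techniques such a survey covers, a linear deformable object (e.g., a rope) is often approximated by a mass-spring chain. The toy sketch below, with illustrative parameters of our own rather than anything from the survey, simulates such a chain hanging from a grasped endpoint.

```python
# Toy sketch of a mass-spring model for a linear deformable object (a rope),
# integrated with explicit Euler; parameters are illustrative only.
import numpy as np

def simulate_rope(n=20, rest_len=0.05, k=500.0, damping=2.0,
                  mass=0.01, dt=1e-3, steps=2000):
    pos = np.stack([np.linspace(0, (n - 1) * rest_len, n), np.zeros(n)], axis=1)
    vel = np.zeros_like(pos)
    gravity = np.array([0.0, -9.81])
    for _ in range(steps):
        force = np.tile(gravity * mass, (n, 1)) - damping * vel
        for i in range(n - 1):
            d = pos[i + 1] - pos[i]
            dist = np.linalg.norm(d)
            f = k * (dist - rest_len) * d / dist  # Hooke's law along the segment
            force[i] += f
            force[i + 1] -= f
        vel += force / mass * dt
        pos += vel * dt
        pos[0], vel[0] = [0.0, 0.0], [0.0, 0.0]  # first node pinned (grasped)
    return pos
```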
A systematic overview of assembly sequencing is presented. Sequencing lies at the core of assembly planning, and its variants include finding a feasible sequence (one respecting the precedence constraints between the assembly operations) or determining an optimal one according to one or several operational criteria. The different ways of representing the space of feasible assembly sequences are described, as well as the search and optimization algorithms that can be used. Geometry plays a fundamental role in devising the precedence constraints between assembly operations, and this is the subject of the second part of the survey, which also treats motion in contact in the context of the actual performance of assembly operations.
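Since a feasible sequence is one that respects the precedence constraints, a minimal way to compute one is a topological sort of the precedence graph (Kahn's algorithm). The sketch below, with illustrative operation names, is our own example rather than an algorithm taken from the survey.

```python
# Minimal sketch: a feasible assembly sequence under precedence constraints is
# a topological order of the precedence graph (Kahn's algorithm).
from collections import deque

def feasible_sequence(operations, precedes):
    """precedes: set of (a, b) pairs, meaning operation a must occur before b.
    Returns one feasible sequence, or None if the constraints are cyclic."""
    indeg = {op: 0 for op in operations}
    succ = {op: [] for op in operations}
    for a, b in precedes:
        succ[a].append(b)
        indeg[b] += 1
    ready = deque(op for op in operations if indeg[op] == 0)
    order = []
    while ready:
        op = ready.popleft()
        order.append(op)
        for nxt in succ[op]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                ready.append(nxt)
    return order if len(order) == len(operations) else None

# Illustrative example: the base must precede both legs; legs precede the top.
print(feasible_sequence(
    ['base', 'leg1', 'leg2', 'top'],
    {('base', 'leg1'), ('base', 'leg2'), ('leg1', 'top'), ('leg2', 'top')}))
```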
Abstract: Robot learning from demonstration faces new challenges when applied to tasks in which forces play a key role. Pouring liquid from a bottle into a glass is one such task, where not just a motion with a certain force profile needs to be learned, but the motion is subtly conditioned by the amount of liquid in the bottle. In this paper, the pouring skill is taught to a robot as follows. In a training phase, the human teleoperates the robot using a haptic device, and data from the demonstrations are statistically encoded by a parametric hidden Markov model, which compactly encapsulates the relation between the task parameter (dependent on the bottle weight) and the force-torque traces. Gaussian mixture regression is then used at the reproduction stage to retrieve the suitable robot actions based on the force perceptions. Computational and experimental results show that the robot is able to learn to pour drinks using the proposed framework, outperforming approaches such as classical hidden Markov models in that it requires less training, yields more compact encodings, and generalizes better.
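The key idea of a parametric hidden Markov model, as commonly formulated, is that each hidden state's emission mean is a function (typically affine) of the task parameter, so a single model generalizes across, here, bottle weights. The sketch below illustrates only that specialization step; the matrices A and b and the weight-derived parameter are placeholders of ours, not the paper's model.

```python
# Minimal sketch of the parametric-HMM idea: emission means are affine in the
# task parameter theta. A, b, and the example values are illustrative only.
import numpy as np

def state_means(A, b, theta):
    """A: (K, D, P) matrices, b: (K, D) offsets, theta: (P,) task parameter
    (e.g., a feature derived from the bottle weight). Returns the (K, D)
    emission means specialized to this execution; standard GMR can then
    condition the joint [force, action] Gaussians on the sensed forces."""
    return np.einsum('kdp,p->kd', A, theta) + b

# Illustrative use: 3 hidden states, 2-D emissions, scalar task parameter.
A = np.random.randn(3, 2, 1) * 0.1
b = np.random.randn(3, 2)
print(state_means(A, b, theta=np.array([0.75])))  # means for a 0.75 kg bottle
```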
Abstract: A learning framework with a bidirectional communication channel is proposed, where a human performs several demonstrations of a task using a haptic device (which provides him/her with force-torque feedback) while a robot captures these executions using only its force-based perceptive system. Our work departs from the usual approaches to learning by demonstration in that the robot has to execute the task blindly, relying only on force-torque perceptions, and, more importantly, we address goal-driven manipulation tasks with multiple solution trajectories, whereas most works tackle tasks that can be learned by simply generalizing at the trajectory level. To cope with these multiple-solution tasks, demonstrations are represented in our framework by a Hidden Markov Model (HMM), and the robot's reproduction of the task is performed using a modified version of Gaussian Mixture Regression that incorporates temporal information (GMRa) through the forward variable of the HMM. We also exploit the haptic device as a teaching and communication tool in a human-robot interaction context, as an alternative to kinesthetic teaching systems. Results show that the robot is able to learn a container-emptying task relying only on force-based perceptions and to achieve the goal from several non-trained initial conditions.
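The GMRa modification can be pictured as follows: the HMM forward variable, updated from the force-torque stream, replaces the instantaneous component responsibilities as the mixing weights of the regression, injecting temporal context when several states explain the current perception equally well. The sketch below uses our own notation and is not the authors' code.

```python
# Minimal sketch of the GMRa idea: HMM forward recursion over the perception
# part, with the forward variable alpha reused as regression mixing weights.
import numpy as np
from scipy.stats import multivariate_normal

def forward_step(alpha_prev, trans, means, covs, f_obs, n_force):
    """One step of the HMM forward recursion on the force-torque observation."""
    K = len(alpha_prev)
    emit = np.array([multivariate_normal.pdf(f_obs, means[k, :n_force],
                                             covs[k, :n_force, :n_force])
                     for k in range(K)])
    alpha = emit * (alpha_prev @ trans)   # trans[i, j] = P(state j | state i)
    return alpha / alpha.sum()

def gmra_action(alpha, means, covs, f_obs, n_force):
    """GMR over joint [force, action] Gaussians, but weighted by the forward
    variable instead of the instantaneous component responsibilities."""
    action = np.zeros(means.shape[1] - n_force)
    for k in range(len(alpha)):
        gain = covs[k, n_force:, :n_force] @ np.linalg.inv(covs[k, :n_force, :n_force])
        action += alpha[k] * (means[k, n_force:] + gain @ (f_obs - means[k, :n_force]))
    return action
```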
Assistive robots need to be able to perform a large number of tasks that involve some type of cloth manipulation. These tasks include domestic chores such as laundry handling or bed-making, among others, as well as dressing assistance for disabled users. Due to the deformable nature of fabrics, this manipulation requires strong perceptual feedback. Common perceptual skills that enable robots to complete cloth manipulation tasks are reviewed here, relying mainly on vision, but also resorting to touch and force. The use of such basic skills is then examined in the context of the different cloth manipulation tasks, be they garment-only applications in the line of performing domestic chores, or tasks involving physical contact with a human, as in dressing assistance.

Keywords: Robotic assistance · Robotic cloth manipulation · Perception of cloth

1 Introduction

Robots perform quite competently nowadays in structured environments, tackling hard tasks under tough working conditions, and even handling contingencies that could be anticipated. The requirements posed by assistive settings, however, point in a quite different direction. The top concerns are no longer precision and repeatability, but rather a high degree of adaptability to varying ambient conditions, the ability to learn, multimodal human-robot interaction capabilities, and integrated safety. It is obvious that robots cannot replace humans entirely in assistive environments; probably they shouldn't either. Nonetheless, it is desirable that they be able to perform a variety of tasks within domestic and service environments such as hospitals or care homes. Such tasks fall under what is commonly known as domestic chores, whose fulfillment ensures not just tidy homes, but the very life conditions of physically disabled people, with acceptable standards of human dignity and coverage of needs. Human companionship cannot be obviated, but the burden of associated duties without added value in personal interchange can certainly be alleviated by a robotic helper.

Among these tasks, those that involve the manipulation of textile items deserve to be highlighted: all types of garments, of course, but also other categories of fabric-made objects such as bed- and tablecloths, curtains, towels, kitchen rags, dishcloths, etc. The omnipresence of such items in human daily environments and the importance of handling them correctly are evident. The means of providing assistive robots with the necessary abilities to perform cloth manipulation are not so clear. Ideally one would like to replicate human proficiency in manipulating clothes, but the robotic state of the art is still far from achieving the needed perceptual skills and the required dexterity.

Cloth perception and manipulation