Autonomous service robots have become a key research topic in robotics, particularly for household chores. A typical home is highly unconstrained, and a service robot must constantly adapt to new situations. In this paper, we address the problem of autonomous cleaning tasks in uncontrolled environments. In our approach, a human instructor uses kinesthetic demonstrations to teach a robot how to perform different cleaning tasks on a table. We then use Task-Parametrized Gaussian Mixture Models (TP-GMMs) to encode the variability of the demonstrations while providing appropriate generalization abilities. TP-GMMs extend Gaussian Mixture Models with an auxiliary set of reference frames in order to extrapolate the demonstrations to different task parameters, such as movement location, amplitude, or orientation. However, the reference frames that parametrize TP-GMMs can be very difficult to extract in practice, as doing so may require segmenting cluttered images of the working table-top. Instead, in this work the reference frames are extracted automatically from the robot's camera images, using a deep neural network trained during human demonstrations of a cleaning task. This approach has two main benefits: (i) it takes the human completely out of the loop while the robot performs complex cleaning tasks; and (ii) the network identifies the specific task to be performed directly from image data, thus also enabling automatic task selection from a set of previously demonstrated tasks. The system was implemented on the iCub humanoid robot. During the tests, the robot successfully cleaned a table with two different types of dirt (wiping a marker's scribble or sweeping clusters of lentils).
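For reference, the standard TP-GMM formulation combines the per-frame Gaussian components into a single global component through a product of linearly transformed Gaussians. The sketch below states this step in the usual notation (the symbols follow the common presentation of TP-GMMs and are assumptions, not taken verbatim from this paper):

```latex
% Given P reference frames with parameters {A_j, b_j} and local
% component parameters mu_k^{(j)}, Sigma_k^{(j)}, the k-th global
% component is obtained as the product of the transformed Gaussians:
\[
\hat{\Sigma}_k = \Big( \sum_{j=1}^{P} \big( A_j \Sigma_k^{(j)} A_j^{\top} \big)^{-1} \Big)^{-1},
\qquad
\hat{\mu}_k = \hat{\Sigma}_k \sum_{j=1}^{P} \big( A_j \Sigma_k^{(j)} A_j^{\top} \big)^{-1} \big( A_j \mu_k^{(j)} + b_j \big).
\]
```

Changing the frame parameters {A_j, b_j} is what adapts the learned movement to new locations and orientations; extracting those parameters from cluttered images is the difficulty this work sidesteps with a neural network.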
We address the problem of teaching a robot how to perform table-cleaning tasks autonomously and robustly. In particular, we focus on wiping and sweeping a table with a tool (e.g., a sponge). For the training phase, we use a set of kinesthetic demonstrations performed over a table. The recorded 2D table-space trajectories, together with the images acquired by the robot, are used to train a deep convolutional network that automatically learns the parameters of a Gaussian Mixture Model representing the hand movement. After the learning stage, the network is fed the current image showing the location and shape of the dirt or stain to clean. The robot then performs cleaning arm movements, obtained through Gaussian Mixture Regression using the mixture parameters provided by the network. Invariance to the robot's posture is achieved by applying a plane-projective transformation before inputting the images to the neural network; robustness to illumination changes and other disturbances is increased by training on an augmented data set. This improves the generalization properties of the neural network, enabling, for instance, its use with the left arm after training on trajectories acquired with the right arm. The system was tested on the iCub robot, generating cleaning behaviour similar to that of the human demonstrators.
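A minimal sketch of this execution pipeline is given below, assuming OpenCV for the plane-projective rectification and NumPy for Gaussian Mixture Regression. The network stub `predict_gmm_params`, the table-corner coordinates, and all variable names are hypothetical placeholders, not the authors' implementation:

```python
import numpy as np
import cv2

def rectify(frame, corners_img, size=(320, 240)):
    """Warp the observed table plane to a canonical top-down view with a
    plane-projective (homography) transform, removing posture dependence."""
    w, h = size
    corners_canon = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(np.float32(corners_img), corners_canon)
    return cv2.warpPerspective(frame, H, size)

def gmr(t, priors, means, covs, i=(0,), o=(1, 2)):
    """Gaussian Mixture Regression: E[x_o | x_i = t] under the given GMM;
    here the input dimension is time and the output is a 2D table point."""
    i, o = list(i), list(o)
    K = len(priors)
    h = np.empty(K)
    for k in range(K):  # responsibility of each component for input t
        d = np.atleast_1d(t) - means[k][i]
        S = covs[k][np.ix_(i, i)]
        h[k] = priors[k] * np.exp(-0.5 * d @ np.linalg.solve(S, d)) \
               / np.sqrt((2 * np.pi) ** len(i) * np.linalg.det(S))
    h /= h.sum()
    x = np.zeros(len(o))
    for k in range(K):  # blend the conditional means of all components
        d = np.atleast_1d(t) - means[k][i]
        S_ii = covs[k][np.ix_(i, i)]
        S_oi = covs[k][np.ix_(o, i)]
        x += h[k] * (means[k][o] + S_oi @ np.linalg.solve(S_ii, d))
    return x

def predict_gmm_params(image):
    """Stand-in for the trained CNN (hypothetical): returns fixed GMM
    parameters; the real network would predict them from the image."""
    priors = np.array([0.5, 0.5])
    means = [np.array([0.25, 0.2, 0.5]), np.array([0.75, 0.8, 0.5])]
    covs = [np.diag([0.05, 0.02, 0.02])] * 2
    return priors, means, covs

# Illustrative use: rectify the current camera view, query the (assumed)
# network for mixture parameters, and retrieve a 2D cleaning trajectory.
frame = np.zeros((480, 640, 3), np.uint8)  # stand-in for the camera image
view = rectify(frame, [[112, 86], [527, 94], [600, 410], [40, 402]])
priors, means, covs = predict_gmm_params(view)
path = [gmr(t, priors, means, covs) for t in np.linspace(0.0, 1.0, 100)]
```

Because the network only ever sees the rectified, posture-independent view, the same learned mixture can drive either arm; the regression step then turns the predicted mixture into a time-indexed trajectory for the hand.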