In this paper we report on a recent public experiment in which two robots make pancakes using web instructions. In the experiment, the robots retrieve instructions for making pancakes from the World Wide Web and generate robot action plans from these instructions. The task is jointly performed by two autonomous robots: the first robot opens and closes cupboards and drawers, takes a pancake mix from the refrigerator, and hands it to the second robot. The second robot cooks and flips the pancakes, and then delivers them back to the first robot. While the robot plans in the scenario are all percept-guided, they are also limited in different ways and rely on manually implemented sub-plans for parts of the task. We therefore discuss the potential of the underlying technologies as well as the research challenges raised by the experiment.
This article investigates methods for achieving more general manipulation capabilities for mobile manipulation platforms that produce legible behavior in human living environments. To achieve generality and legibility, we combine two control mechanisms. First, experience- and observation-based learning of skills is applied to routine tasks, so that the repetitive and stereotypical character of everyday activity is exploited. Second, we use planning, reasoning, and search for novel tasks that have no stereotypical solution. We apply these ideas to the learning and use of action-related places, to the model-based visual recognition and localization of objects, and to the learning and application of reaching strategies and motions from humans. We demonstrate the integration of these mechanisms into a single low-level control system for autonomous manipulation platforms.
Better sensing is crucial to improving robotic grasping and manipulation. Most robots currently have very limited perception in their manipulators, typically only fingertip position and velocity. Additional sensors make richer interactions with objects possible. In this paper, we present a versatile, robust, and low-cost sensor for robot fingertips that can improve robotic grasping and manipulation in several ways: 3D reconstruction of object shapes, material surface classification, and object slip detection. We extended TUM-Rosie, our robot for mobile manipulation, with fingertip sensors on its humanoid robotic hand, and show the advantages of the fingertip sensors integrated into our robot system.
This paper introduces the Assistive Kitchen as a comprehensive demonstration and challenge scenario for technical cognitive systems. We describe its hardware and software infrastructure. Within the Assistive Kitchen application, we select particular domain activities as research subjects and identify the cognitive capabilities needed for perceiving, interpreting, analyzing, and executing these activities as research foci. We conclude by outlining open research issues that must be solved to realize these scenarios successfully.