In this paper we report on a recent public experiment in which two robots make pancakes using instructions from the Web. The robots retrieve instructions for making pancakes from the World Wide Web and generate robot action plans from them. The task is performed jointly by two autonomous robots: the first robot opens and closes cupboards and drawers, takes a pancake mix from the refrigerator, and hands it to the second robot. The second robot cooks and flips the pancakes and then delivers them back to the first robot. While the robot plans in the scenario are all percept-guided, they are also limited in various ways and rely on manually implemented sub-plans for parts of the task. We therefore discuss the potential of the underlying technologies as well as the research challenges raised by the experiment.
This article investigates methods for achieving more general manipulation capabilities for mobile manipulation platforms that produce legible behavior in human living environments. To achieve generality and legibility, we combine two control mechanisms. First, experience- and observation-based learning of skills is applied to routine tasks, exploiting the repetitive and stereotypical character of everyday activity. Second, we use planning, reasoning, and search for novel tasks that have no stereotypical solution. We apply these ideas to the learning and use of action-related places, to the model-based visual recognition and localization of objects, and to the learning and application of human reaching strategies and motions. We demonstrate the integration of these mechanisms into a single low-level control system for autonomous manipulation platforms.
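The two-mechanism control idea described above can be illustrated with a minimal sketch: routine tasks are dispatched to learned skills, and tasks without a stereotypical solution fall back to planning. All names and behaviors below are hypothetical illustrations, not the paper's actual implementation.

```python
# Minimal dispatch sketch: routine tasks use experience-based skills,
# novel tasks fall back to planning. All names here are illustrative.

# Hypothetical library of skills acquired from experience and observation.
learned_skills = {
    "pick-cup": lambda: "execute learned reaching motion",
    "open-drawer": lambda: "execute learned pulling motion",
}

def plan(task: str) -> str:
    # Placeholder for search-based planning over action primitives.
    return f"search for action sequence solving '{task}'"

def control(task: str) -> str:
    """Prefer a learned skill for routine tasks; plan when none exists."""
    skill = learned_skills.get(task)
    return skill() if skill else plan(task)

print(control("pick-cup"))
print(control("stack-plates"))
```

The design choice this sketch highlights is that the learned-skill path is cheap and stereotyped, while the planning path is general but costly, so the dispatcher tries the former first.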
This paper introduces the Assistive Kitchen as a comprehensive demonstration and challenge scenario for technical cognitive systems. We describe its hardware and software infrastructure. Within the Assistive Kitchen application, we select particular domain activities as research subjects and identify the cognitive capabilities needed for perceiving, interpreting, analyzing, and executing these activities as research foci. We conclude by outlining open research issues that need to be solved to realize the scenarios successfully.
In this paper we propose a bridge between a symbolic reasoning system and a task-function-based controller. We suggest the use of modular position and force constraints, represented as action-object-object triples on the symbolic side and as task-function parameters on the controller side. This description is a considerably more fine-grained interface than what has been seen in high-level robot control systems before, and it preserves the 'null space' of the task and makes it available to the control level. We demonstrate how a symbolic description can be translated into a control-level description that is executable on the robot. We describe the relation to existing robot knowledge bases and indicate information sources for generating constraints on the symbolic side. On the control side, we then show how our approach outperforms a traditional controller by exploiting the task's null space, leading to a significantly extended workspace.
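The symbolic-to-controller interface described above can be sketched as a translation from an action-object-object triple to a small set of task-function parameters. The data types, the "keep-above" action, and the numeric ranges below are all hypothetical placeholders; a real system would derive them from a robot knowledge base.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SymbolicConstraint:
    """An action-object-object triple on the symbolic side."""
    action: str   # e.g. "keep-above" (hypothetical action name)
    tool: str     # controlled object, e.g. "spatula"
    target: str   # reference object, e.g. "pancake"

@dataclass
class TaskFunctionConstraint:
    """Parameters of one task-function dimension on the controller side.

    Dimensions left unconstrained form the task's null space, which the
    controller can exploit, e.g. to extend the reachable workspace.
    """
    feature: str  # controlled feature, e.g. a distance along z
    lower: float  # lower bound of the allowed range (metres)
    upper: float  # upper bound of the allowed range (metres)

# Hypothetical lookup translating symbolic actions into controller
# parameters; the ranges here are made-up illustrative values.
TRANSLATION = {
    "keep-above": lambda tool, target: TaskFunctionConstraint(
        feature=f"z-distance({tool}, {target})", lower=0.02, upper=0.10),
}

def translate(c: SymbolicConstraint) -> TaskFunctionConstraint:
    """Map a symbolic triple to its control-level description."""
    return TRANSLATION[c.action](c.tool, c.target)

constraint = SymbolicConstraint("keep-above", "spatula", "pancake")
print(translate(constraint))
```

Because only one feature is constrained, all remaining degrees of freedom stay free, which is the sense in which this fine-grained interface preserves the task's null space for the control level.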