In this paper we report on a recent public experiment in which two robots make pancakes using web instructions. In the experiment, the robots retrieve instructions for making pancakes from the World Wide Web and generate robot action plans from them. The task is jointly performed by two autonomous robots: the first robot opens and closes cupboards and drawers, takes a pancake mix from the refrigerator, and hands it to the second robot. The second robot cooks and flips the pancakes, and then delivers them back to the first robot. While the robot plans in the scenario are all percept-guided, they are also limited in various ways and rely on manually implemented sub-plans for parts of the task. We therefore discuss both the potential of the underlying technologies and the research challenges raised by the experiment.
This article investigates methods for achieving more general manipulation capabilities for mobile manipulation platforms that produce legible behavior in human living environments. To achieve generality and legibility, we combine two control mechanisms. First, experience- and observation-based learning of skills is applied to routine tasks, exploiting the repetitive and stereotypical character of everyday activity. Second, we use planning, reasoning, and search for novel tasks that have no stereotypical solution. We apply these ideas to the learning and use of action-related places, to the model-based visual recognition and localization of objects, and to the learning and application of human reaching strategies and motions. We demonstrate the integration of these mechanisms into a single low-level control system for autonomous manipulation platforms.
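To make the idea of action-related places concrete, here is a minimal sketch of one plausible way to learn them: cluster the base poses from which a manipulation action previously succeeded and treat the cluster centres as candidate places. The sample poses, the choice of k-means, and the two-cluster setup are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch: learning "action-related places" by clustering robot
# base positions from which a manipulation action previously succeeded.
import numpy as np
from sklearn.cluster import KMeans

# (x, y) base poses logged during successful pick-up attempts (made-up data)
successful_poses = np.array([
    [0.52, 1.10], [0.55, 1.07], [0.49, 1.13],   # near the counter
    [1.98, 0.31], [2.03, 0.28], [2.00, 0.35],   # near the fridge
])

# Each cluster centre is a candidate "place" from which the action is
# likely to succeed again.
places = KMeans(n_clusters=2, n_init=10).fit(successful_poses)
for centre in places.cluster_centers_:
    print(f"action-related place at x={centre[0]:.2f}, y={centre[1]:.2f}")
```

At execution time, the robot would navigate to the nearest learned place before attempting the corresponding action.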
This paper introduces the Assistive Kitchen as a comprehensive demonstration and challenge scenario for technical cognitive systems. We describe its hardware and software infrastructure. Within the Assistive Kitchen application, we select particular domain activities as research subjects and identify the cognitive capabilities needed for perceiving, interpreting, analyzing, and executing these activities as research foci. We conclude by outlining open research issues that need to be solved to realize the scenarios successfully.
In this paper we propose a bridge between a symbolic reasoning system and a task-function-based controller. We suggest using modular position and force constraints, which are represented as action-object-object triples on the symbolic side and as task-function parameters on the controller side. This description is a considerably finer-grained interface than has previously been seen in high-level robot control systems. It can preserve the 'null space' of the task and make it available to the control level. We demonstrate how a symbolic description can be translated to a control-level description that is executable on the robot. We describe the relation to existing robot knowledge bases and indicate information sources for generating constraints on the symbolic side. On the control side we then show how our approach outperforms a traditional controller by exploiting the task's null space, leading to a significantly extended workspace.
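As a rough illustration of such an interface, the sketch below encodes a symbolic action-object-object triple and expands it into range-valued task-function constraints; specifying an admissible range rather than a single set-point is what keeps the task's null space available to the controller. The names, the Constraint type, and the numeric ranges are hypothetical, not taken from the paper.

```python
# Illustrative sketch: translating a symbolic action-object-object triple
# into control-level task-function constraints with admissible ranges.
from dataclasses import dataclass

@dataclass
class Constraint:
    feature_a: str      # controlled feature, e.g. a tool edge
    feature_b: str      # reference feature, e.g. an object surface
    function: str       # task function, e.g. "height" or "distance"
    lower: float        # admissible range instead of a single set-point:
    upper: float        # anything inside the range leaves the null space free

def translate(triple):
    """Expand a symbolic (action, tool, target) triple into control-level
    constraints via a hand-written lookup (for illustration only)."""
    action, tool, target = triple
    if action == "keep-above":
        return [Constraint(tool, target, "height", 0.02, 0.05),
                Constraint(tool, target, "distance", 0.0, 0.03)]
    raise ValueError(f"no constraint template for action '{action}'")

for c in translate(("keep-above", "spatula-front", "pancake")):
    print(c)
```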
Autonomous personal robots are currently being equipped with hands and arms that have kinematic redundancy similar to that of humans. Humans exploit the redundancy in their motor system by optimizing secondary criteria. Tasks that are executed repeatedly lead to movements that are highly optimized over time, resulting in stereotypical and preplanned motion patterns. This stereotypical motion can be modeled well with compact models, as has been shown for locomotion. In this paper, we determine compact models for human reaching and obstacle avoidance in everyday manipulation tasks, and port these models to an articulated robot. We acquire compact models by analyzing human reaching data, recorded with a magnetic motion tracker, using dimensionality reduction and clustering methods. The stereotypical reaching trajectories so acquired are used to train a Dynamic Movement Primitive, which is executed on the robot. This enables the robot not only to follow these trajectories accurately, but also to use the compact model to predict and execute further human trajectories.
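For concreteness, here is a minimal one-dimensional Dynamic Movement Primitive in the standard Ijspeert-style formulation: the weights of a forcing term are fitted to a demonstrated trajectory, and the primitive is then rolled out toward a new goal. The gains, the basis-function count, and the synthetic stand-in for a recorded human reach are assumptions; this is a sketch, not the authors' implementation.

```python
# Minimal discrete DMP: learn a forcing term from one demonstration,
# then replay the learned movement shape toward a new goal.
import numpy as np

alpha, beta, alpha_s, n_basis = 25.0, 25.0 / 4, 4.0, 20   # typical gains

# Synthetic stand-in for one recorded human reaching trajectory (1-D)
t = np.linspace(0.0, 1.0, 200)
dt = t[1] - t[0]
y = np.sin(0.5 * np.pi * t)                 # smooth 0 -> 1 reach
dy = np.gradient(y, t)
ddy = np.gradient(dy, t)
y0, g = y[0], y[-1]

# Canonical system s(t) and Gaussian basis functions over s
s = np.exp(-alpha_s * t)
c = np.exp(-alpha_s * np.linspace(0.0, 1.0, n_basis))   # basis centres
h = n_basis ** 1.5 / c                                  # basis widths
psi = np.exp(-h * (s[:, None] - c) ** 2)

# Forcing term that reproduces the demonstration; fit weights by least squares
f_target = ddy - alpha * (beta * (g - y) - dy)
w = np.linalg.lstsq(psi * (s * (g - y0))[:, None], f_target, rcond=None)[0]

# Roll the DMP out toward a new goal: the shape generalizes, the endpoint moves
g_new, x, v, s_run, rollout = 1.5, y0, 0.0, 1.0, []
for _ in t:
    p = np.exp(-h * (s_run - c) ** 2)
    f = (p @ w) / (p.sum() + 1e-10) * s_run * (g_new - y0)
    v += (alpha * (beta * (g_new - x) - v) + f) * dt
    x += v * dt
    s_run -= alpha_s * s_run * dt
    rollout.append(x)
print(f"reached {rollout[-1]:.3f} (new goal {g_new})")
```

Because the forcing term is scaled by the goal offset and by the decaying canonical variable, the learned shape transfers to new goals while the spring-damper term guarantees convergence to the goal.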
In this work we propose a method to extract visual features from a tool held in a robot's hand and to derive basic properties of how to handle the tool correctly. We show how a robot can improve its accuracy in certain tasks through visual exploration of geometric features. We also show methods for extending the proprioception of the robot's arm to the new end effector, including the tool. By combining 3D and 2D data, it is possible to extract features such as geometric edges, flat surfaces, and concavities. From those features we can distinguish several classes of objects and make basic measurements of potential contact areas and other properties relevant for performing tasks. We also present a controller that uses the relative position or orientation of such features as constraints for manipulation tasks in the world. Such a controller makes it easy to model complex tasks like pancake flipping or sausage fishing. The extension of proprioception is achieved by a generalized filter setup for a set of force-torque sensors, which allows the detection of indirect contacts made through a tool and the extraction of basic information, such as the approximate contact direction, from the sensor data.
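The filter setup is not spelled out here, but a minimal sketch of the underlying idea might look as follows: subtract a free-motion bias from the wrist force signal, smooth it, and threshold the residual to detect a contact made through the tool, taking the filtered force direction as the approximate contact direction. The filter constants, the threshold, and the synthetic readings are all assumptions.

```python
# Hedged sketch: detecting an indirect contact through a hand-held tool
# from wrist force readings via bias removal, smoothing, and thresholding.
import numpy as np

def detect_contact(forces, alpha=0.2, threshold=1.5):
    """forces: (N, 3) wrist force samples in newtons.
    Returns (contact detected?, estimated contact direction unit vector)."""
    bias = forces[:20].mean(axis=0)          # free-motion baseline
    filt = np.zeros(3)
    for f in forces:
        filt = alpha * (f - bias) + (1 - alpha) * filt   # exponential smoothing
    magnitude = np.linalg.norm(filt)
    if magnitude < threshold:
        return False, None
    return True, filt / magnitude            # approximate contact direction

# Synthetic readings: free motion noise, then a push along -z through the tool
rng = np.random.default_rng(0)
free = rng.normal(0.0, 0.1, (20, 3))
push = rng.normal([0.0, 0.0, -4.0], 0.1, (30, 3))
hit, direction = detect_contact(np.vstack([free, push]))
print(hit, direction)
```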