Autonomous household robots are expected to accomplish complex tasks, such as cleaning the dishes, that involve both navigation and manipulation within the environment. For navigation, spatial information is mostly sufficient, but manipulation tasks raise the demand for deeper knowledge about objects, such as their types, their functions, or the way they can be used. We present KNOWROB-MAP, a system for building environment models for robots by combining spatial information about objects in the environment with encyclopedic knowledge about the types and properties of objects, with common-sense knowledge describing what the objects can be used for, and with knowledge derived from observations of human activities by learning statistical relational models. In this paper, we describe the concept and implementation of KNOWROB-MAP and present several examples demonstrating the range of information the system can provide to autonomous robots.
An autonomous robot system that is to act in a real-world environment faces the problem of having to deal with a high degree of both complexity and uncertainty. Robots should therefore be equipped with a knowledge representation system that can soundly handle both aspects. In this paper, we thus introduce an architecture that couples plan-based robot controllers with a probabilistic knowledge representation system based on recent developments in statistical relational learning, which possesses the required level of expressiveness and generality. We outline possible applications of the corresponding models in the context of robot control, discussing suitable representation formalisms, inference and learning methods, as well as transparent extensions of a robot planning language that allow robot control programs to soundly integrate the results of probabilistic inference into their plan generation process.
In the context of robotic assistants in human everyday environments, pick and place tasks are beginning to be competently solved at the technical level. The question of where to place objects or where to pick them up from, among other higher-level reasoning tasks, is therefore gaining practical relevance. In this work, we consider the problem of identifying the organizational structure within an environment, i.e., the problem of determining organizational principles that would allow a robot to infer where to best place a particular, previously unseen object, or where to reasonably search for a particular type of object, given past observations about the allocation of objects to locations in the environment. This problem can be reasonably formulated as a classification task. We claim that organizational principles are governed by the notion of similarity and provide an empirical analysis of the importance of various features in datasets describing the organizational structure of kitchens. For the aforementioned classification tasks, we compare standard classification methods, reaching average accuracies of at least 79% in all scenarios. We thereby show that, in particular, ontology-based similarity measures are well suited as highly discriminative features. We demonstrate the use of learned models of organizational principles in a kitchen environment on a real robot system, where the robot identifies a newly acquired item, determines a suitable location, and then stores the item accordingly.
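The core idea of the abstract above, inferring a storage location for an unseen object from past (object, location) observations via ontology-based similarity, can be illustrated with a minimal sketch. The mini-ontology, object names, and observations below are all illustrative assumptions, not data from the paper; the similarity measure is a Wu-Palmer-style score over a toy is-a hierarchy, and classification is plain 1-nearest-neighbour.

```python
# Toy is-a hierarchy standing in for a real ontology (all names illustrative).
PARENT = {
    "cup": "vessel", "mug": "vessel", "glass": "vessel",
    "fork": "cutlery", "spoon": "cutlery", "knife": "cutlery",
    "vessel": "item", "cutlery": "item", "item": None,
}

def ancestors(concept):
    """Chain of concepts from `concept` up to the root, inclusive."""
    chain = []
    while concept is not None:
        chain.append(concept)
        concept = PARENT[concept]
    return chain

def wu_palmer(a, b):
    """Wu-Palmer-style similarity: 2*depth(lcs) / (depth(a) + depth(b))."""
    anc_a, anc_b = ancestors(a), ancestors(b)
    lcs = next(c for c in anc_a if c in anc_b)   # lowest common subsumer
    depth = lambda c: len(ancestors(c))          # distance from the root
    return 2.0 * depth(lcs) / (depth(a) + depth(b))

# Hypothetical past observations: object type -> where it was stored.
observations = [
    ("cup", "shelf"), ("glass", "shelf"),
    ("fork", "drawer"), ("spoon", "drawer"),
]

def predict_location(new_object):
    """1-nearest-neighbour over ontology similarity to past objects."""
    nearest = max(observations, key=lambda obs: wu_palmer(new_object, obs[0]))
    return nearest[1]

print(predict_location("mug"))    # a mug is most similar to cup/glass -> shelf
print(predict_location("knife"))  # a knife is most similar to cutlery -> drawer
```

The paper compares standard classifiers on richer feature sets; this sketch only shows why an ontology-derived similarity is discriminative: objects sharing a close common subsumer end up at the same kind of location.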
This paper introduces the Assistive Kitchen as a comprehensive demonstration and challenge scenario for technical cognitive systems. We describe its hardware and software infrastructure. Within the Assistive Kitchen application, we select particular domain activities as research subjects and identify the cognitive capabilities needed for perceiving, interpreting, analyzing, and executing these activities as research foci. We conclude by outlining open research issues that need to be solved to realize the scenarios successfully.
We propose automated probabilistic models of everyday activities (AM-EvA) as a novel technical means for the perception, interpretation, and analysis of everyday manipulation tasks and activities of daily life. AM-EvAs are detailed, comprehensive models describing human actions at various levels of abstraction, from raw poses and trajectories to motions, actions, and activities. They integrate several kinds of action models in a common, knowledge-based framework to combine observations of human activities with a priori knowledge about actions. AM-EvAs enable robots and technical systems to analyze actions in the complete situation and activity context. They make the classification and assessment of actions and situations objective and can justify the probabilistic interpretation with respect to the activities the concepts have been learned from. AM-EvAs make it possible to analyze and compare the way humans perform actions, which can help with autonomy assessment and diagnosis. In this paper, we describe the concept and implementation of the AM-EvA system and show example results from the observation and analysis of table-setting episodes.