A fundamental challenge in robotics is to reason with incomplete domain knowledge to explain unexpected observations and partial descriptions extracted from sensor observations. Existing explanation generation systems draw on ideas that can be mapped to a multidimensional space of system characteristics, defined by distinctions such as how they represent knowledge and whether and how they reason with heuristic guidance. The instances in this space that correspond to existing systems do not support all of the explanation generation capabilities desired for robots. We seek to address this limitation by thoroughly understanding the range of explanation generation capabilities and the interplay between the distinctions that characterize them. Towards this objective, this paper first specifies three fundamental distinctions that characterize many existing explanation generation systems. We explore the effects of these distinctions by comparing the capabilities of two systems that differ substantially along these axes, using execution scenarios in which a robot waiter assists in seating people and delivering orders in a restaurant. The second part of the paper uses this study to argue that the desired explanation generation capabilities corresponding to these three distinctions can largely be achieved by exploiting the complementary strengths of the two systems explored. This is followed by a discussion of the capabilities related to other major distinctions, leading to detailed recommendations for developing an explanation generation system for robots.
A fundamental challenge in robotics is to reason with incomplete domain knowledge to explain unexpected observations and partial descriptions of domain objects and events extracted from sensor observations. Existing explanation generation systems are based on ideas drawn from two broad classes of systems and do not support all of the desired explanation generation capabilities for robots. The objective of this paper is to compare the explanation generation capabilities of a state-of-the-art system from each of these two classes, using execution scenarios of a robot waiter assisting in a restaurant. Specifically, we investigate KRASP, a system based on the declarative language Answer Set Prolog, which uses an elaborate system description and observations of system behavior to explain unexpected observations and partial descriptions. We also explore UMBRA, an architecture that provides explanations using a weaker system description, a heuristic representation of past experience, and other heuristics for selectively and incrementally searching through relevant ground literals. Based on this study, the paper identifies key criteria, and provides recommendations, for developing an explanation generation system for robots that exploits the complementary strengths of these two classes of explanation generation systems.

CCS Concepts: • Computing methodologies → Knowledge representation and reasoning; Cognitive robotics; Reasoning about belief and knowledge; Causal reasoning and diagnostics; Nonmonotonic, default reasoning and belief revision; Logic programming and answer set programming.
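To make the flavor of ASP-based explanation generation concrete, the following is a minimal sketch in Answer Set Prolog (clingo syntax). It is not the paper's actual KRASP encoding; the fluent occupied/1, the exogenous action seat_guest/1, and all other names are hypothetical illustrations of how an unexpected observation can be explained by abducing an unobserved action occurrence.

% Hypothetical sketch: the robot believes table1 is unoccupied, then observes
% it occupied at step 1, and abduces an unobserved exogenous action
% seat_guest(table1) to explain the discrepancy.

step(0..1).
table(table1).

% Inertia axioms for the fluent occupied(T).
holds(occupied(T), S+1)  :- holds(occupied(T), S),  not -holds(occupied(T), S+1), table(T), step(S).
-holds(occupied(T), S+1) :- -holds(occupied(T), S), not holds(occupied(T), S+1),  table(T), step(S).

% Causal law: seating a guest makes the table occupied.
holds(occupied(T), S+1) :- occurs(seat_guest(T), S), table(T), step(S).

% Initial belief and an unexpected observation.
-holds(occupied(table1), 0).
obs(occupied(table1), 1).

% Reality check: observations must be entailed by the agent's beliefs.
:- obs(F, S), not holds(F, S).

% Abduce occurrences of unobserved exogenous actions, preferring as few as possible.
{ occurs(seat_guest(T), S) : table(T), step(S) }.
#minimize { 1,T,S : occurs(seat_guest(T), S) }.

#show occurs/2.

Under these assumptions, the optimal answer set contains occurs(seat_guest(table1), 0): the unexpected observation is explained by positing that a guest was seated while the robot was not observing the table.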