Recent work on explanation generation for decision-making agents has looked at how unexplained behavior of autonomous systems can be understood in terms of differences between the system's model and the human's understanding of it, and how the explanation process arising from this mismatch can then be seen as a process of reconciling these models. Existing algorithms in such settings, while built on the contrastive, selective, and social properties of explanations studied extensively in the psychology literature, have not, to the best of our knowledge, been evaluated in settings with actual humans in the loop. As such, the applicability of such explanations to human-AI and human-robot interactions remains suspect. In this paper, we set out to evaluate these explanation generation algorithms in a series of studies in a mock search and rescue scenario with an internal semi-autonomous robot and an external human commander. We demonstrate to what extent the properties of these algorithms hold as they are evaluated by humans, and how the dynamics of trust between the human and the robot evolve during these interactions.

From the perspective of planning and decision making, the notion of explanations of the deliberative process of an AI-based system was first explored extensively in the context of expert systems [24]. Similar techniques have been studied for explanations in case-based planning systems [16,28] and in interactive planning [26].
Detecting text embedded in complex, colored document images is a challenging problem. Text extraction has many potential uses, such as image search and document archiving. In this paper, we propose a simple edge-based feature to perform this task. It aims at detecting textual regions in a document and separating them from the graphics portions. The algorithm relies on the sharp edges of characters, which are absent in pictorial regions. We find these edges and use them to distinguish text from images. This edge information can also be used for other image interpretation tasks.
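The abstract does not give the feature's exact formulation, but the underlying idea, classifying a region as text when it contains a high density of sharp intensity edges, can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the gradient operator, the threshold values, and the block-level decision rule are all assumptions chosen for clarity.

```python
# Hedged sketch of edge-density text/graphics classification.
# An image is a list of rows of grayscale values (0-255). Threshold
# values (thresh, density_thresh) are illustrative, not from the paper.

def edge_map(img, thresh=50):
    """Mark pixels whose horizontal or vertical intensity jump exceeds thresh."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = abs(img[y][x] - img[y][x - 1]) if x > 0 else 0
            gy = abs(img[y][x] - img[y - 1][x]) if y > 0 else 0
            if max(gx, gy) > thresh:
                edges[y][x] = 1  # sharp transition, typical of character strokes
    return edges

def classify_block(img, density_thresh=0.15):
    """Label a block 'text' if its sharp-edge density is high, else 'graphics'."""
    edges = edge_map(img)
    edge_count = sum(sum(row) for row in edges)
    density = edge_count / (len(img) * len(img[0]))
    return "text" if density > density_thresh else "graphics"
```

On a block of high-contrast strokes (e.g., alternating dark and light columns) the edge density is high and the block is labeled text; a smooth gradient, as found in natural images, produces almost no sharp edges and is labeled graphics.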
This paper studies how a domain-independent planner and combinatorial search can be employed to play Angry Birds, a well-established AI challenge problem. To model the game, we use PDDL+, a planning language for mixed discrete/continuous domains that supports durative processes and exogenous events. The paper describes the PDDL+ model and identifies key design decisions that reduce the problem complexity. In addition, we propose several domain-specific enhancements, including heuristics and a search technique similar to preferred operators. Together, they alleviate the complexity of combinatorial search. We evaluate our approach by comparing its performance with dedicated domain-specific solvers on a range of Angry Birds levels. The results show that our performance is on par with these domain-specific approaches in most levels, even without using our domain-specific search enhancements.
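The abstract names PDDL+ processes and events but includes no fragment of the model. As a hedged illustration only (the predicates, functions, and names below are invented for exposition and are not the authors' actual model), a projectile-flight process and a ground-collision event in PDDL+ might look like:

```
; Hypothetical PDDL+ fragment: continuous flight as a durative process,
; landing as an exogenous event. #t denotes continuous time in PDDL+.
(:process flying
  :parameters (?b - bird)
  :precondition (in-flight ?b)
  :effect (and (increase (x ?b) (* #t (vx ?b)))
               (increase (y ?b) (* #t (vy ?b)))
               (decrease (vy ?b) (* #t (gravity)))))

(:event hit-ground
  :parameters (?b - bird)
  :precondition (and (in-flight ?b) (<= (y ?b) 0))
  :effect (not (in-flight ?b)))
```

The process updates the bird's position and velocity continuously while it is in flight, and the event fires automatically when the altitude reaches zero; this separation of continuous dynamics from instantaneous world changes is the core modeling pattern PDDL+ provides.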