2006
DOI: 10.21236/ada459166
Building Explainable Artificial Intelligence Systems

Abstract: As artificial intelligence (AI) systems and behavior models in military simulations become increasingly complex, it has been difficult for users to understand the activities of computer-controlled entities. Prototype explanation systems have been added to simulators, but designers have not heeded the lessons learned from work in explaining expert system behavior. These new explanation systems are not modular and not portable; they are tied to a particular AI system. In this paper, we present a modular and gene…

Cited by 90 publications (54 citation statements)
References 5 publications
“…Early textual explanation models were applied to medical images [25] and developed as feedback for teaching programs [12,27,3]. These systems are mainly template based.…”
Section: Related Work
confidence: 99%
“…Broadly, this principle is in line with the considerable body of research on providing explanations in intelligent systems, ranging from the earliest expert systems, which explained their reasoning in terms of the chain of rules used [Clancey 1983], to more recent work on recommenders [Glass et al 2008], architectures for explainable AI [Core et al 2006], and the engineering of service-oriented adaptation [Koidl and Conlan 2008]. That work confirms that some representation and reasoning approaches are easier for people to understand.…”
Section: Principles For Creating Scrutable User Models
confidence: 77%
“…Hence, while finding an optimal deceptive policy, or enumerating all optimal deceptive policies, may be sufficient to deceive an adversary, analyzing and learning from deceptive behavior in one scenario in order to determine deceptive behavior in a similar scenario would require us to describe the set of deceptive policies in understandable terms. Such a question broadly falls within the research effort on explainable artificial intelligence [51]-[53].…”
Section: Discussion
confidence: 99%