2021
DOI: 10.22541/au.163699841.19031727/v1
Preprint

DARPA’s Explainable AI (XAI) program: A retrospective

Abstract: DARPA formulated the Explainable Artificial Intelligence (XAI) program in 2015 with the goal of enabling end users to better understand, trust, and effectively manage artificially intelligent systems. The four-year XAI research program began in 2017. Now, as XAI comes to an end in 2021, it is time to reflect on what succeeded, what failed, and what was learned. This article summarizes the goals, organization, and research progress of the XAI program.

Cited by 24 publications (15 citation statements) | References 6 publications
“…In recent years, “Explainable AI (XAI)” has become a popular research area. The Defense Advanced Research Projects Agency (DARPA) launched the XAI program in 2017, intending to create new or modified ML technologies and produce explainable models that enable users to understand, trust, and effectively manage AI systems [78]. In 2018, the General Data Protection Regulation (GDPR) of the European Union also stated that data subjects have a right to request explanations about automated decisions made by algorithms [79].…”
Section: Discussion
confidence: 99%
“…As a consequence, with the advent of DL, there are increasing discussions in the field about the relationship between ML model complexity and interpretability and the tendency to use models that are too complicated for prediction tasks at hand (Rudin, 2019). Furthermore, increasing attention is paid to explainable ML (Belle and Papantonis, 2021; Rodríguez-Pérez and Bajorath, 2021a) and the overarching area of explainable AI (XAI) (Gunning et al., 2019, 2021; Jiménez-Luna et al., 2020; Xu et al., 2019). XAI refers to different categories of computational approaches for rationalizing ML models and their decisions in different areas of basic and applied research (Gunning et al., 2019; Jiménez-Luna et al., 2020; Xu et al., 2019) as well as in scientific teaching (Clancey and Hoffman, 2021).…”
Section: Introduction
confidence: 99%
“…Explanation methods are equally relevant for classification and regression models (Letzgus et al., 2021; Rodríguez-Pérez and Bajorath, 2020a). Conceptually different XAI approaches include methods for feature weighting or attribution, causal methods, counterfactuals and contrastive explanations, transparent probabilistic models, or graph convolution analysis methods (Gunning et al., 2021; Jiménez-Luna et al., 2020). In addition, local approximation models have been introduced for instance-based explanations of decisions by complex black box models (Ribeiro et al., 2016; Lundberg and Lee, 2017).…”
Section: Introduction
confidence: 99%
“…The need to transform black-box decisions into transparent decisions for human decision-makers has led to a new field of study known as explainable AI (XAI) [11]. Many XAI techniques are found in the literature to make ML systems explainable.…”
Section: Introduction
confidence: 99%