2020
DOI: 10.24251/hicss.2020.120
Model Interpretation and Explainability towards Creating Transparency in Prediction Models

Cited by 2 publications (4 citation statements)
References 8 publications
“…XAI offers a paradigm shift towards interpretability and explainability required in many fields utilizing ML and AI. We have introduced a taxonomy classifying two unique instances of XAI, 'dynamic' and 'static' cases [10], formulated harmony as a measure of the distance between explanations of these cases (Cosine and Jaccard Similarity), and employed a perturbation-based algorithm 1 to systematically quantify it on several models, metrics, and datasets.…”
Section: Discussion (mentioning, confidence: 99%)
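
The excerpt names two concrete distance measures for comparing explanations: Cosine and Jaccard similarity. A minimal sketch of how such a harmony score might be computed, assuming each explanation is a feature-attribution vector; the example vectors, function names, and top-k choice are illustrative assumptions, not code from the citing paper:

    import numpy as np

    def cosine_similarity(a, b):
        # Cosine similarity of two feature-attribution vectors.
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 0.0

    def jaccard_similarity(a, b, k=3):
        # Jaccard similarity of the top-k features ranked by |attribution|.
        top_a = set(np.argsort(np.abs(np.asarray(a, dtype=float)))[-k:])
        top_b = set(np.argsort(np.abs(np.asarray(b, dtype=float)))[-k:])
        return len(top_a & top_b) / len(top_a | top_b)

    # Hypothetical attributions for one instance from a 'static' and a 'dynamic' explainer
    static_expl = [0.42, -0.10, 0.33, 0.05, -0.21]
    dynamic_expl = [0.40, -0.08, 0.25, 0.12, -0.30]
    print(cosine_similarity(static_expl, dynamic_expl))   # near 1.0 = vectors agree in direction
    print(jaccard_similarity(static_expl, dynamic_expl))  # overlap of the top-3 feature sets

Cosine compares the full attribution vectors, while Jaccard compares only which features make the top-k cut, so the two can disagree when magnitudes shift but rankings hold.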
“…Previous work [10] explored the taxonomy noted above to analyze how well measures of feature importance hold up under 'what-if' perturbations. We extend this work, not in breadth but in depth, offering a framework to systematically quantify the similarity between the two.…”
Section: Related Work (mentioning, confidence: 99%)
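
The 'what-if' perturbation loop this excerpt describes can be sketched end to end: nudge one feature, recompute the explanation, and score how much it moved. The toy linear model, its weights, and the perturbation size below are all illustrative assumptions, not the framework from [10]:

    import numpy as np

    def explain(weights, x):
        # Toy attribution for a linear model: weight * feature value.
        # A stand-in for any real feature-importance method (hypothetical).
        return np.asarray(weights, dtype=float) * np.asarray(x, dtype=float)

    def cosine(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 0.0

    w = [0.8, -0.3, 0.5, 0.1]           # assumed model weights
    x = np.array([1.0, 2.0, 0.5, 3.0])  # original instance
    x_pert = x.copy()
    x_pert[2] += 0.25                   # the 'what-if' perturbation of feature 2

    e_base, e_pert = explain(w, x), explain(w, x_pert)
    print(f"explanation similarity under perturbation: {cosine(e_base, e_pert):.3f}")

A feature-importance measure that 'holds up' should keep this similarity high under small perturbations.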
“…Notably, some contrast exists between explainable and accountable AI, cf. [5], [12]. As noted in [13], not all AI solutions need to be explainable; for example, when 1) trusted and trained algorithms are used only within specified operating conditions, 2) those conditions are well-studied, and/or 3) there are no significant consequences for unacceptable results.…”
Section: Explainable AI (mentioning, confidence: 99%)
“…Thus, accountable AI aims to develop trusted AI agents with known bounds of performance (akin to how decisions made by service dogs are unexplainable, but the dogs can be trusted) [5]. When a human-interpretable understanding of black-box decision making is needed [12], XAI attempts to provide explainable interpretations for AI models throughout their operations. Prior work in the area of explaining complex algorithms includes neural network rule extraction methods, see [14], which began in the 1980s and 1990s by creating tree-based representations of neural network decision processes. However, as conceptualized in Figure 2, XAI extends beyond rule-extraction approaches and includes human-computer interaction concerns as well as user-explanation concerns.…”
Section: Explainable AI (mentioning, confidence: 99%)
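
The tree-based rule extraction described in this excerpt survives today as the global-surrogate idea: fit an interpretable tree to the network's predictions rather than the true labels. A minimal sketch with scikit-learn; the synthetic dataset, network size, and tree depth are arbitrary assumptions, not any method surveyed in [14]:

    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=500, n_features=5, random_state=0)

    # Black-box model whose decision process we want to represent
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X, y)

    # Surrogate tree trained on the network's predictions, not the true labels
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, net.predict(X))
    print(export_text(tree))  # human-readable rules approximating the network

The printed rules form a tree-based representation of the network's decision process, in the spirit of the 1980s-90s extraction methods the excerpt mentions.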