2020
DOI: 10.48550/arxiv.2005.02335
Preprint

Don't Explain without Verifying Veracity: An Evaluation of Explainable AI with Video Activity Recognition

Mahsan Nourani,
Chiradeep Roy,
Tahrima Rahman
et al.

Abstract: Explainable machine learning and artificial intelligence models have been used to justify a model's decision-making process. This added transparency aims to help improve user performance and understanding of the underlying model. However, in practice, explainable systems face many open questions and challenges. Specifically, designers might reduce the complexity of deep learning models in order to provide interpretability. The explanations generated by these simplified models, however, might not accurately just…

Cited by 8 publications (9 citation statements)
References 62 publications
“…For example, getting people to provide feedback to an intelligent system to fix errors can amplify user mistrust in the system (Honeycutt, Nourani, and Ragan 2020). More relevant to this paper, Nourani et al. (2020b) found that after observing a system's weakly justified predictions, users tend to disagree with the system even when it is right. In this paper, we explored the interplay of domain expertise and observed performance on user reliance behaviours.…”
Section: Related Work (mentioning)
confidence: 73%
“…There are many suggested trust-in-automation questionnaires, such as Hoffman et al. (2018), that can be used to measure trust explicitly. Some of the implicit trust measurements include checking user agreement with wrong system outputs (Papenmeier, Englebienne, and Seifert 2019; Nourani et al. 2020b); repeatedly asking for trust ratings (Yu et al. 2017); and measuring user perception of system accuracy as an indication of user trust (Yin, Wortman Vaughan, and Wallach 2019; Nourani et al. 2019). In this paper, we measure trust through both implicit and explicit measures.…”
Section: Related Work (mentioning)
confidence: 99%
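As a rough illustration of the first implicit measure mentioned in the statement above (user agreement with wrong system outputs), the short Python sketch below computes an over-reliance rate from per-trial logs. The function name and data layout are assumptions made for illustration only; they are not taken from the cited papers.

from typing import Sequence

def agreement_with_wrong_outputs(system_correct: Sequence[bool],
                                 user_agreed: Sequence[bool]) -> float:
    # Illustrative implicit trust proxy (hypothetical helper, not from the cited papers):
    # the fraction of incorrect system outputs that the user nevertheless agreed with.
    # Higher values suggest over-reliance, i.e. miscalibrated trust.
    wrong_and_agreed = sum(1 for ok, agreed in zip(system_correct, user_agreed)
                           if not ok and agreed)
    wrong_total = sum(1 for ok in system_correct if not ok)
    return wrong_and_agreed / wrong_total if wrong_total else 0.0

# Example: the system was wrong on trials 2 and 4; the user agreed with the
# wrong output on trial 2 only, giving an over-reliance rate of 0.5.
print(agreement_with_wrong_outputs(
    [True, False, True, False, True],
    [True, True, False, False, True]))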
“…An explanation that cannot be properly understood by a human has no value and may even mislead the user. Indeed, it is essential to provide accurate and understandable explanations, as poor explanations can sometimes be worse than no explanation at all [80] and may also introduce undesired bias in users [81, 82]. As a consequence, properly structuring [83] and evaluating the interpretability and effectiveness of explanations requires a deep understanding of how humans interpret and understand them, while also accounting for the relationship between human understanding and model explanations [84, 85].…”
Section: Understanding the Human's Perspective in Explainable AI (mentioning)
confidence: 99%
“…Hiley et al. [21] surveyed several frameworks that explain action recognition models using LRP- or CAM-based approaches. Roy et al. [36, 39] add a probabilistic model on top of the action model to explain the meaning of each sub-action during inference, following Aakur et al. [2]. Huang [23] studies the effect of motion in action recognition by checking the accuracy drop when frames are used without motion.…”
Section: Related Work (mentioning)
confidence: 99%