2019
DOI: 10.1145/3359158
Dissonance Between Human and Machine Understanding

Abstract: Complex machine learning models are deployed in several critical domains including healthcare and autonomous vehicles nowadays, albeit as functional blackboxes. Consequently, there has been a recent surge in interpreting decisions of such complex models in order to explain their actions to humans. Models which correspond to human interpretation of a task are more desirable in certain contexts and can help attribute liability, build trust, expose biases and in turn build better models. It is therefore crucial t…

Cited by 58 publications (41 citation statements)
References 53 publications (52 reference statements)
“…Another limitation is that for computing human MEPIs in the case of Crop and Combined, the void image begins with a central pixel, which may be far from the relevant region for classification. Refinements of the human MEPI framework would be interesting to explore in the future, for example to replace Crop with a higher-level, component-based analysis [31].…”
Section: Discussion
confidence: 99%
“…It would further be interesting to explore how models trained in such a manner perform on more traditional classification metrics (error rates, precision, etc.) on the original input images, as well as whether they might help to address the observed lack of robustness of state-of-the-art DNNs in the presence of noisy [2,3,14,22] or incomplete information [12,19,24,27,28,31], their lack of generalisation [7], or their bias towards texture [6]. It is important to note that the entropy of an image may increase as certain distortions of practical interest are intensified: more suitable general measures of robustness in such settings are left to be explored.…”
Section: Discussion
confidence: 99%
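The point about image entropy rising under distortion can be illustrated with a minimal sketch (not code from the cited work): the helper `image_entropy` below is a hypothetical name computing the Shannon entropy of an intensity histogram, and additive Gaussian noise is one distortion of the kind the statement mentions.

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (in bits) of an image's intensity histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins before taking the log
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
# A constant (single-intensity) synthetic image: entropy is zero
clean = np.full((64, 64), 128, dtype=np.float64)
# Additive Gaussian noise spreads the histogram, so entropy increases
noisy = np.clip(clean + rng.normal(0, 25, clean.shape), 0, 255)

print(image_entropy(clean))   # zero for a single-valued image
print(image_entropy(noisy))   # strictly larger than the clean entropy
```

This only shows the direction of the effect for one distortion; as the statement notes, histogram entropy alone is not a suitable general robustness measure.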