Proceedings of the Conference on Fairness, Accountability, and Transparency 2019
DOI: 10.1145/3287560.3287590

On Human Predictions with Explanations and Predictions of Machine Learning Models

Abstract: Humans are the final decision makers in critical tasks that involve ethical and legal concerns, ranging from recidivism prediction, to medical diagnosis, to fighting fake news. Although machine learning models can sometimes achieve impressive performance, these tasks are not amenable to full automation. To realize the potential of machine learning for improving human decisions, it is important to understand how assistance from machine learning models affects human performance and human a…

Cited by 261 publications (305 citation statements). References 55 publications.
“…Multiple studies examined the effect of accuracy information [17,30,32], and found that people increase their trust in the model when high accuracy indicators are displayed, reflected both in subjective reporting and in choices more consistent with the model's recommendations. Closest to ours is the work by Lai and Tan [17], who studied the effect of showing predictions (in contrast to a baseline without AI assistance), accuracy, and multiple types of explanation for AI-assisted decision-making in a deception-detection scenario. They found that all these features increased people's trust, measured as acceptance of the AI's recommendation as the final decision, and also the decision accuracy.…”
Section: Related Work
confidence: 99%
“…This chance number was calculated from the training dataset, based on the proportion of people with the corresponding attribute value earning income above 50K. We multiplied these proportions by 10 and rounded the result, since prior work shows that people understand frequencies better than probabilities [17]. For example, in Figure 1, the chance value for occupation indicates that 5 people out of 10 with the occupation of Executive & Managerial have annual income above 50K.…”
Section: Task and Materials
confidence: 99%
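The frequency framing in the excerpt above amounts to a one-line computation. A minimal Python sketch of that conversion follows; it is not the cited authors' code, and the function name and the 52% example proportion are hypothetical:

def chance_out_of_ten(proportion: float) -> int:
    """Convert a training-set proportion (0.0-1.0) into a rounded
    'k out of 10' frequency, which people tend to read more easily
    than a probability."""
    return round(proportion * 10)

# Hypothetical example: suppose 52% of people with the occupation
# "Executive & Managerial" in the training data earn above 50K.
p_high_income = 0.52
print(f"{chance_out_of_ten(p_high_income)} out of 10 earn above 50K")
# -> "5 out of 10 earn above 50K"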
“…Lai and Tan [35] demonstrated a trade-off between performance and human agency by exposing participants to varying levels of machine assistance (of an SVM) while they were identifying deceptive reviews. They found that explanations without the suggestion of a label slightly improved human performance.…”
Section: Evaluations of Saliency Map for Text-Based Classifiers
confidence: 99%