Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems 2020
DOI: 10.1145/3313831.3376873
"Why is 'Chicago' deceptive?" Towards Building Model-Driven Tutorials for Humans

Abstract: To support human decision making with machine learning models, we often need to elucidate patterns embedded in the models that are unsalient, unknown, or counterintuitive to humans. While existing approaches focus on explaining machine predictions with real-time assistance, we explore model-driven tutorials to help humans understand these patterns in a training phase. We consider both tutorials with guidelines from scientific papers, analogous to current practices of science communication, and automatically se…

Cited by 87 publications
(123 citation statements)
References 48 publications
“…Lai and Tan [42]: deception detection, feature contribution, N/A, N/A, ✓?
Lai et al. [41]: deception detection, feature contribution, N/A, N/A, ✓?
Cai et al. [13]: drawing recognition, example-based, mixed results, N/A, N/A
Yang et al. [69]: leaf classification, example-based, N/A, N/A, ✓
Note: "N/A" means the study does not examine the desideratum.…”
Section: Literature Review
confidence: 99%
“…To answer these questions, researchers have been advocating for moving beyond defining what constitutes a "good" explanation using the model designer's intuition, and instead examining how useful an explanation is with human users [24, 59]. In responding to this call, there is recently a growing line of literature on empirically evaluating the effectiveness of XAI methods (e.g., [14, 16, 41, 69, 71]). Yet, principles required for an explanation to be considered helpful in AI-assisted decision making, arguably, still remain to be articulated and comprehensively assessed.…”
Section: Introduction
confidence: 99%
“…For example, if two XAI techniques were used and compared as separate experimental treatments we added two entries into our database. In total, we identified five articles and 12 experiments [40,56,57,58,59]. In the following, we describe the studies and their results with regard to IDA in detail.…”
Section: Validation Study
confidence: 99%
“…Another related direction is identifying helpful sentences in product reviews (Gamzu et al., 2021). It is useful to highlight our motivation in supporting decision making in challenging tasks towards effective human-AI collaboration (Green and Chen, 2019; Lai et al., 2020; Lai and Tan, 2019). Unlike tasks such as textual entailment where models aim to emulate human intelligence, forecasting future outcomes, such as stock markets (Xing et al., 2018) and message popularity (Tan et al., 2014), is challenging both for humans and for machines.…”
Section: Related Work
confidence: 99%