Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems
DOI: 10.1145/3411764.3445717

Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance

Cited by 298 publications (303 citation statements); references 62 publications.
“…It has long been reported that algorithms interacting with humans can negatively impact the human. In our case, the concern might be that users can develop an over-reliance on Polyjuice (Bansal et al., 2021) and hastily accept its generations. Not only can this decrease users' creativity (Green et al., 2014), but it may bias their analysis process: as discussed in §7, Polyjuice generation is not exhaustive, and may favor some perturbation patterns over others in unpredictable ways.…”
Section: Ethical Considerations (mentioning)
confidence: 99%
“…These models are black boxes and do not offer interpretable explanations for their predictions; there is no notion of step-by-step deduction. This limitation prevents users from understanding and accommodating models' affordances (Hase and Bansal, 2020; Bansal et al., 2021).…”
Section: Introduction (mentioning)
confidence: 99%
“…By combining two types of intelligence, these emerging sociotechnical systems (i.e., human+AI teams) were expected to perform better than either people or AIs alone [35,36]. Recent studies, however, show that although human+AI teams typically outperform people working alone, their performance is usually inferior to the AI's [2,7,9,26,30,41,59]. There is evidence that instead of combining their own insights with suggestions generated by the computational models, people frequently overrely on the AI, following its suggestions even when those suggestions are wrong and the person would have made a better choice on their own [9,30,41].…”
Section: Introduction (mentioning)
confidence: 99%
“…Explainable AI (XAI), an approach where AI's recommendations are accompanied by explanations or rationales, was intended to address the problem of overreliance: by giving people insight into how the machine arrived at its recommendations, the explanations were supposed to help them identify the situations in which the AI's reasoning was incorrect and the suggestion should be rejected. However, evidence suggests that explainable systems, too, have not had substantial success in reducing human overreliance on the AI: when the AI suggests incorrect or suboptimal solutions, people still on average make poorer final decisions than they would have without the AI's assistance [2,7,30,40,67].…”
Section: Introduction (mentioning)
confidence: 99%