2020
DOI: 10.48550/arxiv.2010.07938
Preprint

Deciding Fast and Slow: The Role of Cognitive Biases in AI-assisted Decision-making

Abstract: Several strands of research have aimed to bridge the gap between artificial intelligence (AI) and human decision-makers in AI-assisted decision-making, where humans consume AI model predictions and make the ultimate decisions in high-stakes applications. However, people's perception and understanding are often distorted by cognitive biases, such as confirmation bias, anchoring bias, and availability bias, to name a few. In this work, we use knowledge from the field of cognitive science to account fo…

Cited by 6 publications (24 citation statements)
References 46 publications
“…To model deficiencies or biases in understanding and reasoning in humans, researchers have investigated Bayesian representations and similar. For instance, (Hierarchical) Bayesian models can demonstrate how humans reason about likelihoods of outcomes and likelihoods of scenarios [18,19,105,125,129,139], and how human reasoning shows flawed interpretations or skewed scales of importance (high or low). This illustrates how humans can under-react to probabilities [18,19] or estimate likelihoods based on observed frequencies.…”
Section: Applications and Recent Results
confidence: 99%
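The under-reaction to probabilities mentioned in the statement above is often modeled with a probability weighting function. As an illustrative sketch (not code from the cited works), the Tversky–Kahneman (1992) weighting function captures the stylized pattern that people overweight small probabilities and underweight large ones:

```python
def weight(p: float, gamma: float = 0.61) -> float:
    """Tversky-Kahneman (1992) probability weighting function.

    Maps an objective probability p to a subjective decision weight.
    With gamma < 1, small probabilities are inflated and large
    probabilities deflated -- a stylized account of how people
    "under-react" to stated probabilities. gamma=0.61 is the value
    estimated in the original paper; it is illustrative here.
    """
    num = p ** gamma
    return num / (num + (1.0 - p) ** gamma) ** (1.0 / gamma)

# A 1% risk is weighted as if it were ~5.5%; a 99% chance as ~91%.
print(weight(0.01))  # > 0.01 (overweighted)
print(weight(0.99))  # < 0.99 (underweighted)
```

The endpoints are preserved (w(0)=0, w(1)=1), so only intermediate probabilities are distorted.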
“…As noted above, biases can be represented in both interpretations of information as well as in the exhibited behavior. This can be demonstrated by Bayesian models of cognitive biases [105], biased probability judgments [139], sensitivity to risk/uncertainty [26], and similar topics relating to representation and interpretation of data/statistics [19,70]. Biases can also be exhibited in how people respond to others or digital avatars (e.g.…”
Section: Applications and Recent Results
confidence: 99%
“…Thus we expected that AI inferences presented at the same time as initial analysis would be more influential than when the inferences are presented after an initial assessment. Research on the explanation of AI inferences frames opportunities for further study of the influences of designs for workflow of human-AI collaboration, including altering the timing of AI-assistance and forcing users to spend more time on instances where AI inferences present higher uncertainty [4,10,32,67,73].…”
Section: Workflow Considerations for Human-AI Teams
confidence: 99%
“…In findings related to cognitive effort, research on "cognitive forcing" has explored methods for pushing human decision makers to spend more time with deliberating about problems [10,32,67,73]. Work in this area includes making AI assistance only available upon request or employing a "slow algorithm" that loads while the user waits to input their decision.…”
Section: Workflow Considerations for Human-AI Teams
confidence: 99%
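One of the cognitive-forcing designs described above — making AI assistance available only after (or upon request following) an independent initial decision — can be sketched as a small state machine. This is a hypothetical illustration; the class and method names are inventions for this sketch, not from the cited papers:

```python
from typing import Optional


class CognitiveForcingSession:
    """Hypothetical sketch of a cognitive-forcing workflow: the AI
    recommendation is withheld until the user has committed an
    initial, independent decision."""

    def __init__(self, ai_recommendation: str):
        self.ai_recommendation = ai_recommendation
        self.initial_decision: Optional[str] = None

    def record_initial_decision(self, decision: str) -> None:
        """The user must commit their own judgment first."""
        self.initial_decision = decision

    def request_ai_advice(self) -> str:
        """AI advice is only released after an initial decision exists."""
        if self.initial_decision is None:
            raise RuntimeError(
                "Record your own decision before viewing AI advice."
            )
        return self.ai_recommendation
```

The design choice here is to make the forcing structural (an exception) rather than advisory, so the workflow cannot skip the deliberation step; a "slow algorithm" variant would instead delay the return of `request_ai_advice`.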
“…One key challenge in AI-assisted decision-making is whether the human-AI team can achieve complementary performance, i.e., the collaborative decision outcome outperforming human or AI alone [6,54,103]. A critical step toward complementary performance is that human decision-makers could properly determine when to take the AI's suggestion into consideration and when to be skeptical about it [14,81,103]. Since well-calibrated AI confidence scores can represent the model's actual correctness likelihood (CL) [5,6,41], several recent studies propose different designs to help humans allocate appropriate trust to AI based on this information [6,81,103].…”
Section: Introduction
confidence: 99%
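The "well-calibrated confidence" notion in the statement above is commonly quantified with the expected calibration error (ECE): bin predictions by stated confidence and compare each bin's mean confidence to its empirical accuracy. A minimal NumPy sketch (generic ECE, not an implementation from the cited studies):

```python
import numpy as np


def expected_calibration_error(conf, correct, n_bins: int = 10) -> float:
    """Expected calibration error over equal-width confidence bins.

    conf:    stated confidence scores in [0, 1]
    correct: 1 if the corresponding prediction was right, else 0

    A model with ECE near 0 is one whose confidence can be read as
    its actual correctness likelihood (CL).
    """
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Bins are (lo, hi], with the first bin closed at 0.
        mask = (conf > lo) & (conf <= hi) if lo > 0 else (conf >= lo) & (conf <= hi)
        if mask.any():
            gap = abs(conf[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight by bin's share of samples
    return ece
```

For example, a model that says 0.9 on every instance but is right only half the time has an ECE of 0.4, which is exactly the miscalibration that would lead a human to over-trust its suggestions.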