Proceedings of the ACM Web Conference 2022
DOI: 10.1145/3485447.3512240

Will You Accept the AI Recommendation? Predicting Human Behavior in AI-Assisted Decision Making

Abstract: In AI-assisted decision-making, it is crucial but challenging for humans to achieve appropriate reliance on AI. This paper approaches this problem from a human-centered perspective, "human self-confidence calibration". We begin by proposing an analytical framework to highlight the importance of calibrated human self-confidence. In our first study, we explore the relationship between human self-confidence appropriateness and reliance appropriateness. Then in our second study, we propose three calibration mechanisms…

Cited by 17 publications (4 citation statements). References 96 publications.
“…This is because they tend to believe they are correct when they are confident (although this is not always true, as humans may sometimes suffer from the "Dunning-Kruger effect"). More specifically, it was found that when performance information about the AI model is absent, humans are more likely to rely on their own judgement when their own decision confidence is high, while they are more receptive to the AI recommendation when their own decision confidence is low (Wang, Lu, and Yin 2022; Chong et al 2022). In addition, the level of agreement between the AI recommendation and humans' independent judgement on those decision-making tasks where humans are highly confident about their own judgement also significantly impacts humans' trust and reliance on AI: the higher the level of agreement, the more humans trust and rely on the AI recommendation (Lu and Yin 2021).…”
Section: Accounting For Engagement Behavior When AI Assists Individua… (mentioning)
confidence: 99%
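
The pattern quoted above can be made concrete as a simple decision rule. The following is a minimal, hypothetical sketch, not taken from any of the cited studies; the variable names and the 0.7 threshold are illustrative assumptions.

```python
# Minimal sketch (not from the cited papers): a threshold-style reliance rule that
# mirrors the pattern described above when AI performance information is absent.
# The confidence scale and threshold value are illustrative assumptions.

def accepts_ai_recommendation(human_confidence: float,
                              agrees_with_ai: bool,
                              confidence_threshold: float = 0.7) -> bool:
    """Return True if the human is likely to adopt the AI recommendation.

    human_confidence: the human's confidence in their own independent
        judgement, on an assumed 0-1 scale.
    agrees_with_ai: whether the human's independent judgement matches the AI.
    """
    if agrees_with_ai:
        # Agreement makes adopting the (identical) recommendation trivial, and
        # repeated agreement on high-confidence cases tends to build trust.
        return True
    # On disagreement, low self-confidence makes deferring to the AI more likely,
    # while high self-confidence favors sticking with one's own judgement.
    return human_confidence < confidence_threshold


# Example usage
print(accepts_ai_recommendation(human_confidence=0.4, agrees_with_ai=False))  # True
print(accepts_ai_recommendation(human_confidence=0.9, agrees_with_ai=False))  # False
```

The cited work characterizes this tendency statistically rather than with a hard threshold; the rule above only makes the qualitative pattern concrete.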
“…One approach to building humans' mental models is through data-driven methods. For example, in a loan approval task, Wang et al [95] construct a general human prediction model via a neural network with crowdsourcing data.…”
Section: Mental Model in Human-AI Collaboration (mentioning)
confidence: 99%
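
The data-driven approach mentioned in the quote above can be sketched as training a small classifier on logged human decisions. The snippet below is a rough illustration under assumed features (human confidence, AI confidence, and agreement) and synthetic stand-in data; it is not the architecture or dataset used by Wang et al.

```python
# Minimal sketch of a data-driven human-behavior predictor: a small neural network
# trained on (crowdsourced) decision records to predict whether a person will
# accept the AI recommendation. Features, data, and hyperparameters are assumptions.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for crowdsourced decision records:
# columns = [human_confidence, ai_confidence, human_ai_agreement (0/1)]
n = 2000
X = np.column_stack([
    rng.uniform(0, 1, n),    # human's self-reported confidence
    rng.uniform(0.5, 1, n),  # AI's displayed confidence
    rng.integers(0, 2, n),   # does the human's own judgement agree with the AI?
])
# Assumed generative rule for the label "accepted the AI recommendation".
logit = -2.0 * X[:, 0] + 1.5 * X[:, 1] + 2.5 * X[:, 2] + rng.normal(0, 0.5, n)
y = (logit > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Small feed-forward network as the human-behavior predictor.
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

In a real study the records would come from crowdworkers completing the decision task with AI assistance, with the same features extracted from their interaction logs.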
“…In our second goal, we use the cognitive modeling approach to understand how a human's reliance policy depends on a number of factors related to the human and the AI. Previous research has shown that a human's confidence in their own decision influences the tendency to rely on AI assistance (Lu and Yin, 2021; Pescetelli et al, 2021; Wang et al, 2022). In addition, reliance on the AI is also affected by the AI's confidence in its decision (Zhang et al, 2020).…”
Section: Introduction (mentioning)
confidence: 99%
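
One common way to formalize such a reliance policy is a logistic model over the two confidence signals. The sketch below uses made-up weights purely for illustration; it is not a model fitted in any of the cited papers.

```python
# Minimal sketch (illustrative assumptions only): a logistic reliance policy in
# which the probability of switching to the AI recommendation rises with the AI's
# stated confidence and falls with the human's own decision confidence.

import math

def reliance_probability(human_confidence: float,
                         ai_confidence: float,
                         w_human: float = -3.0,
                         w_ai: float = 3.0,
                         bias: float = 0.0) -> float:
    """Probability (0-1) that the human adopts the AI recommendation."""
    score = bias + w_human * human_confidence + w_ai * ai_confidence
    return 1.0 / (1.0 + math.exp(-score))


# A confident AI paired with an unsure human yields a high reliance probability.
print(round(reliance_probability(human_confidence=0.2, ai_confidence=0.9), 2))
# A confident human paired with a hesitant AI yields a low one.
print(round(reliance_probability(human_confidence=0.9, ai_confidence=0.6), 2))
```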