2019
DOI: 10.1007/978-3-030-29387-1_31
Following Wrong Suggestions: Self-blame in Human and Computer Scenarios

Abstract: This paper investigates the experience of following a suggestion from an intelligent machine that leads to a wrong outcome, and the emotions people feel as a result. Adopting a task typical of decision-making studies, we presented participants with two scenarios in which they follow a suggestion, given by either an expert human being or an intelligent machine, and obtain a wrong outcome. We found a significant decrease in perceived responsibility for the wrong choice when the machine offers the suggestion. At pre…

Cited by 3 publications (2 citation statements)
References 31 publications
“…Participants are initially resistant to incorrect advice, but consistency in such advice resets their perception. This observation aligns with prior studies in the field of algorithm aversion [50,51,53]. Confidence increases were not uniform across demographics.…”
Section: Results (supporting; confidence: 88%)
“…In part, this is because AIs have no social accountability if they are wrong. When relying on an AI leads to negative outcomes, causality tends to be attributed to the AI system, leading to a decrease in trust (Beretta et al, 2019; Horvath et al, 2006).…”
Section: Introduction (mentioning; confidence: 99%)