2023
DOI: 10.31234/osf.io/swfn6
Preprint

People devalue generative AI’s competence but not its advice in addressing societal and personal challenges

Abstract: The release of ChatGPT has received significant attention from both scientists and the public. Despite its acknowledged capabilities and potential applications, the perception and reaction of individuals to content generated by ChatGPT is not well understood. To address this, we focus on two important application domains: recommendations for (i) societal challenges and (ii) personal challenges. In two preregistered experimental studies, we investigate how individuals evaluate the author’s competence, the quali…

Cited by 3 publications (1 citation statement)
References 18 publications
“…In line with these findings, algorithm aversion describes the consumers' preference for a human when a task is subjective by nature [31] or concerned with moral decisions because machines are thought to lack a mind and emotions [32]. AI has also been perceived as less competent in giving advice for addressing societal challenges [33]. Applied to AI authorship, Tandoc, Yao, and Wu [23] found a decrease in source and message credibility when the AI was perceived to write non-objectively.…”
Section: Algorithm Aversion
confidence: 91%