2023
DOI: 10.2139/ssrn.4380365

A Manager and an AI Walk into a Bar: Does ChatGPT Make Biased Decisions Like We Do?

Cited by 11 publications (11 citation statements)
References 41 publications
“…On the one hand, LLMs replicated results from tasks related to personality traits (Caron & Srivastava, 2022), framing effects (Chen et al, 2023), as well as political attitudes and party preferences (Argyle et al, 2023). For example, Caron and Srivastava (2022) surveyed Reddit users about their “Big Five” personalities and trained LLMs with these user‐specific contextual data.…”
Section: Using LLMs To Mimic Human Behavior (mentioning)
Confidence: 62%
“…Although efforts to substitute human respondents with LLMs are relatively new, several studies have already conducted comparisons of human and silicon samples. These comparisons stem from various domains (e.g., human-computer-interaction, general psychology, social psychology) and consider a wide range of tasks and settings (e.g., cognitive reflection task, Hagendorff et al, 2023; On the one hand, LLMs replicated results from tasks related to personality traits (Caron & Srivastava, 2022), framing effects (Chen et al, 2023), as well as political attitudes and party preferences (Argyle et al, 2023). For example, Caron and Srivastava (2022) surveyed Reddit users about their "Big Five" personalities and trained LLMs with these user-specific contextual data.…”
Section: Using LLMs To Mimic Human Behavior (mentioning)
Confidence: 99%
“…In addition, a study (Chen et al, 2023) specifically investigated behavioral biases relevant to operations management. ChatGPT exhibits human-like biases in complex, ambiguous, and implicit problems, such as conjunction bias, probability weighting, framing effects, salience of anticipated regret, and reference dependence.…”
Section: Current Research On the Integration Of ChatGPT In TOC TP (mentioning)
Confidence: 99%
“…Universal self-consistency for LLM generation (Chen Y. et al, 2023). This approach utilizes multiple reasoning paths sampled from LLMs and then selects the most consistent answer among multiple candidates.…”
Section: Model Examples In the Corresponding Domain Observations (mentioning)
Confidence: 99%
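The excerpt above mentions universal self-consistency only in passing. As an illustration of the idea it describes, the sketch below samples several candidate answers and then asks the model itself to pick the one most consistent with the others. The `llm` callable, the prompt wording, and the index-parsing fallback are assumptions made for this sketch, not the cited paper's exact procedure or any specific library's API.

```python
# Minimal, illustrative sketch of universal self-consistency:
# sample several reasoning paths, then let the model select the
# candidate answer most consistent with the others.
import random
from typing import Callable, List


def universal_self_consistency(
    llm: Callable[[str], str],   # hypothetical text-in/text-out model call
    question: str,
    n_samples: int = 5,
) -> str:
    # 1. Sample multiple candidate answers (reasoning paths).
    candidates: List[str] = [
        llm(f"Q: {question}\nThink step by step, then give your answer.")
        for _ in range(n_samples)
    ]

    # 2. Ask the model to pick the most consistent candidate.
    numbered = "\n".join(f"[{i}] {c}" for i, c in enumerate(candidates))
    selection_prompt = (
        f"Question: {question}\n"
        f"Candidate answers:\n{numbered}\n"
        "Reply with only the index of the answer most consistent with the others."
    )
    reply = llm(selection_prompt)

    # 3. Parse the chosen index; fall back to a random candidate if parsing fails.
    digits = "".join(ch for ch in reply if ch.isdigit())
    if digits and int(digits) < len(candidates):
        return candidates[int(digits)]
    return candidates[random.randrange(len(candidates))]
```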