2023
DOI: 10.31219/osf.io/tg79n
Preprint

Perils and Opportunities in Using Large Language Models in Psychological Research

Suhaib Abdurahman,
Mohammad Atari,
Farzan Karimi-Malekabadi
et al.

Abstract: The emergence of large language models (LLMs) has sparked considerable interest in their potential application in psychological research, either as a human-like entity used as a model for the human psyche or as a general text-analysis tool. However, carelessly using LLMs in psychological studies, a trend we rhetorically refer to as "GPTology," can have negative consequences, especially given the convenient access to models such as ChatGPT. We elucidate the promises, limitations, and ethical considerations of…

Cited by 5 publications (7 citation statements)
References 42 publications
“…From a more fundamental perspective, researchers question whether LLMs can validly be used as models of human thought since an LLM's working principle involves computing the most probable next text element in a sequence. This process differs considerably from a human participant's feelings and reasoning abilities (e.g., Abdurahman et al., 2023; Demszky et al., 2023).…”
Section: Introduction
confidence: 93%
“…If LLMs such as GPT act like rational agents, their results cannot readily be used to explain or predict consumers' bounded rational decision-making. Besides the differences in risk preferences and System 1 processing, GPT, specifically, can hardly mimic interpersonal differences, which are crucial for psychological or marketing research studies (e.g., Abdurahman et al., 2023; Park et al., 2023; Santurkar et al., 2023). However, given that the field is evolving rapidly (i.e., industrial players, such as OpenAI and Google, release improved LLMs in quick succession), it is reasonable to assume that future LLM implementations will mimic human behavior more closely than current implementations do.…”
Section: The Way Forward
confidence: 99%
“…It has been argued that the ability to generate domain-specific and structured data rapidly, as well as convert structured knowledge into natural sentences, opens new possibilities for data collection (Ding et al., 2023; see Abdurahman et al., 2023, for a review). The use of GPT-3 for data annotation has shown promising results, with its accuracy and intercoder agreement surpassing those of human annotators in many Natural-Language Processing (NLP) tasks (Wang et al., 2021; Gilardi et al., 2023; Rathje et al., 2023; Webb et al., 2023).…”
Section: Advantages and Limits of Automatic Cultural Annotation
confidence: 99%
“…Note that we strongly advocate for pairing LLM methods with other more established research techniques in all studies where it is possible, enabling case-by-case convergence testing and facilitating future meta-analyses. In all, this paper does not seek to debate whether LLMs can (Abdurahman et al., 2023) or should (Crockett and Messeri, 2023) be used in science (see also Brinkmann et al., 2023); instead, it provides a concrete how-to guide for applying these models to large cultural datasets.…”
Section: Introduction
confidence: 99%
“…Note upfront that we strongly advocate for pairing LLM methods with other more established research techniques in all studies where it is possible, enabling case-by-case convergence testing and facilitating future meta-analyses. In all, this paper does not seek to debate whether LLMs can (Abdurahman et al., 2023) or should (Crockett & Messeri, 2023) be used in science; instead, it provides a concrete how-to guide for applying these models to large cultural datasets.…”
Section: Introduction
confidence: 99%