2023
DOI: 10.31234/osf.io/mnyz8
Preprint

Linguistic Markers of Inherently False AI Communication and Intentionally False Human Communication: Evidence from Hotel Reviews

Abstract: To the human eye, AI-generated outputs of large language models have increasingly become indistinguishable from human-generated outputs. Therefore, to determine the linguistic properties that separate AI-generated text from human-generated text, we used a state-of-the-art chatbot, ChatGPT, and compared how it wrote hotel reviews (Study 1a; N = 1,200 total reviews) and news headlines (Study 1b; N = 900 total headlines) to human-generated counterparts across content (emotion), style (analytic writing, adjectives…
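The abstract describes comparing AI-generated and human-written texts on content markers (emotion) and style markers (analytic writing, adjectives). As a rough illustration of that kind of comparison only — not the authors' actual pipeline or measures — the Python sketch below computes two simple per-100-word marker rates for two sets of reviews; the word lists and example reviews are hypothetical placeholders, and a real analysis would rely on validated dictionaries or a part-of-speech tagger.

# Minimal sketch: compare simple linguistic markers between two sets of reviews.
# The word lists and example texts below are hypothetical placeholders.
import re
from statistics import mean

POSITIVE_EMOTION = {"great", "wonderful", "lovely", "amazing", "excellent"}
ADJECTIVES = {"clean", "spacious", "friendly", "modern", "comfortable",
              "great", "wonderful", "lovely", "amazing", "excellent"}

def marker_rates(text):
    # Per-100-word rates of emotion words and adjectives in one review.
    words = re.findall(r"[a-z']+", text.lower())
    n = max(len(words), 1)
    return {
        "emotion_rate": 100 * sum(w in POSITIVE_EMOTION for w in words) / n,
        "adjective_rate": 100 * sum(w in ADJECTIVES for w in words) / n,
    }

def summarize(reviews):
    # Average each marker across a set of reviews.
    rates = [marker_rates(r) for r in reviews]
    return {key: mean(r[key] for r in rates) for key in rates[0]}

# Hypothetical stand-ins for human-written vs. AI-generated hotel reviews.
human_reviews = ["The room was clean and the staff were friendly."]
ai_reviews = ["A wonderful, lovely stay with amazing, excellent, spacious rooms."]

print("human:", summarize(human_reviews))
print("ai:   ", summarize(ai_reviews))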

Cited by 2 publications (7 citation statements)
References 39 publications

“…The changes to the prompts were derived from past work that showed how minor changes to the self-report design, such as adding contextual information or changing the response scale, can elicit different response patterns (Schwarz, 1999). Our findings demonstrate, in line with recent works on the effect of prompting (Fujita et al., 2022; Gan & Mori, 2023; Lu et al., 2021; Markowitz, 2023; …), that minor changes in prompts lead to significant differences in outputs. For example, adding a study […] Agreeableness (d = −0.14; p < .001; 95% CI [−0.18, −0.10]).…”
Section: Reproducibility Matters (supporting)
confidence: 84%
“…Liu et al., 2023; Romera-Paredes & Torr, 2015), for psychological text analysis, presumably due to their ease of use and accessibility. For example, Markowitz (2023), Rathje et al. (2023), and Zhu et al. (2023) reported high performance of ChatGPT as an automated text analysis tool, such as for sentiment analysis, offensive language, thinking style, or emotion detection. Rathje et al. (2023) further concluded that LLMs constitute a viable all-purpose method for psychological text analysis, arguably more convenient than small(er) language models and traditional techniques in NLP, due to their ability to handle diverse tasks within a single model without needing task-specific adjustments, and their user-friendly design that minimizes the need for complex coding, making them more accessible to psychologists and potentially encouraging broader research engagement.…”
Section: LLMs Are Not An All-Purpose Method (mentioning)
confidence: 99%
“…Moreover, prompting also introduces significant challenges to reproducibility in psychological research, because prompts can be constructed in numerous ways. Past work has shown that slight alterations and modifications in phrasing, context, or order can lead to substantially different responses (Fujita et al., 2022; Gan & Mori, 2023; Lu et al., 2021; Markowitz, 2023; Mishra et al., 2023; …).…”
Section: Reproducibility Matters (mentioning)
confidence: 99%