2021
DOI: 10.31234/osf.io/suf2r
Preprint
Natural Language Analyzed with AI-based Transformers Predict Traditional Well-Being Measures Approaching the Theoretical Upper Limits in Accuracy

Abstract: We show that using a recent breakthrough in artificial intelligence – transformers – psychological assessments from text responses can approach theoretical upper limits in accuracy, converging with standard psychological rating scales. Text responses use people's primary form of communication – natural language – and have been suggested as a more ecologically valid response format than the closed-ended rating scales that dominate social science. However, previous language analysis techniques left a gap between how a…

Cited by 5 publications (10 citation statements)
References 17 publications
“…The findings that participants rate free text response to be more precise in communicating mental health compared to rating scales is consistent with recent findings showing that QCLA has a high validity in measuring mental health. For example, Kjell et al [ 2 , 7 ] used natural language processing algorithms (e.g., Latent Semantic Analysis [LSA]; [ 17 ]; BERT, [ 5 ]) to map the text responses to vectors representing participants’ descriptions of their mental health. They then used machine learning (ML, multiple linear regression) to build a model to predict ratings scales for mental health (e.g., PHQ-9 and GAD-7).…”
Section: Discussion
confidence: 99%
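The pipeline described in the quotation above, mapping free-text responses to vectors (via LSA or BERT) and then regressing rating-scale totals on those vectors, can be sketched roughly as follows. This is a minimal illustration with hypothetical toy responses and made-up PHQ-9-like totals; plain LSA (truncated SVD of a term-document matrix) stands in for the transformer embeddings the quote mentions:

```python
# Sketch of the quoted method: text -> LSA vectors -> linear regression
# predicting a rating-scale score. All data below are hypothetical.
import numpy as np

responses = [
    "i feel sad and tired most days",
    "i am mostly happy and calm",
    "everything feels hopeless and heavy",
    "life is good and i sleep well",
]
scale_totals = np.array([18.0, 3.0, 22.0, 2.0])  # hypothetical PHQ-9-like sums

# Term-document count matrix (one row per response)
vocab = sorted({w for r in responses for w in r.split()})
X = np.array([[r.split().count(w) for w in vocab] for r in responses], float)

# LSA: keep the top-k latent dimensions of the SVD
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
Z = U[:, :k] * s[:k]  # each row is a document vector in latent space

# Multiple linear regression (least squares) from vectors to scale scores
A = np.hstack([Z, np.ones((len(Z), 1))])  # add intercept column
coef, *_ = np.linalg.lstsq(A, scale_totals, rcond=None)
pred = A @ coef
print(np.round(pred, 1))
```

In the studies quoted, the document vectors would instead come from a contextual model such as BERT, and the regression would be fit and evaluated with cross-validation on real PHQ-9/GAD-7 responses.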
“…Data of participants that did not complete the study was not saved. Participants with MDD and GAD have been systematically investigated using QCLA in several previous studies (e.g., [ 2 , 7 , 11 ]). A drawback with this choice was that a smaller group of the participants only had a GAD diagnosis (N = 21), whereas most of them had MDD, which is the focus of the current study (N = 145 and sometimes also GAD).…”
Section: Methods
confidence: 99%
“…In the analyses we focused primarily on comparing the free text and rating scales, where we expected the descriptive and selected words condition to have intermediate rating scores. The choice of these four formats was based on the fact that we previously have been using them in earlier studies (e.g., Kjell et al, 2019;Kjell, Sikström, Kjell, Schwartz, 2021). We then asked the respondents to rate their view of the response format on the following 12 dimensions.…”
Section: Introduction
confidence: 99%