2019
DOI: 10.1287/mnsc.2018.3121

Predicting Risk Perception: New Insights from Data Science

Abstract: We outline computational techniques for predicting perceptions of risk. Our approach uses the structure of word distribution in natural language data to uncover rich representations for a very large set of naturalistic risk sources. With the application of standard machine learning techniques, we are able to accurately map these representations onto participant risk ratings. Unlike existing methods in risk perception research, our approach does not require any specialized participant data and is capable of gen…

Cited by 66 publications (97 citation statements)
References 60 publications
“…Given that we have fewer judgment targets (<=200) than independent variables (300), we chose three regularized regression techniques: lasso, ridge, and support vector machines. We also included k-nearest neighbors (KNN) regression since it was used in work on predicting risk perceptions with word embeddings [19]. We describe our secondary models (lasso, SVR, KNN) more in the SI, and instead focus here on our primary model, ridge regression.…”
Section: Methods
confidence: 99%
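The quoted methods passage describes a regime with fewer judgment targets (around 200) than embedding dimensions (300), which is why the citing authors reach for regularized regression. A minimal sketch of that setup, using ridge regression as their primary model, might look like the following. All arrays here are random placeholders standing in for word embeddings and mean participant ratings, not the study's actual data.

```python
# Hypothetical sketch: ridge regression mapping 300-dimensional word
# embeddings to participant risk ratings, with fewer targets (200) than
# features (300). Random data stands in for real embeddings and ratings.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_targets, n_dims = 200, 300              # fewer targets than features,
X = rng.normal(size=(n_targets, n_dims))  # hence the need for regularization
true_w = rng.normal(size=n_dims)
y = X @ true_w + rng.normal(scale=5.0, size=n_targets)  # stand-in ratings

model = Ridge(alpha=1.0)                  # L2 penalty keeps weights stable
cv_r2 = cross_val_score(model, X, y, cv=10, scoring="r2")
print(cv_r2.mean())
```

With more predictors than observations, ordinary least squares would fit the training data perfectly but generalize poorly; the L2 penalty is what makes cross-validated performance meaningful here.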
“…When a new entity is presented to be judged, it applies its learned mapping to the semantic vector for the new entity to predict participant judgments for that entity. Some prior work has used this approach to generate word norms in psycholinguistic studies (Hollis, Westbury, & Lefsrud, 2017), and to study risk perception (Bhatia, 2019) and brand judgments (Bhatia & Olivola, 2018). In principle, this approach can be applied to any type of human judgment, as long as judgment targets are in the word embeddings vocabulary.…”
Section: From Representation to Judgment
confidence: 99%
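The "learned mapping" idea in the quote above can be sketched in a few lines: fit a regression from semantic vectors to judgments on rated entities, then apply it to the vector of an entity that was never rated. The vocabulary, vectors, and ratings below are random stand-ins for illustration, not real word embeddings or study data.

```python
# Hypothetical illustration of applying a learned mapping to a new entity:
# the model is trained on rated entities' vectors, then predicts a judgment
# for an unrated entity that is still in the embedding vocabulary.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
# Stand-in vocabulary: 201 entities, each with a 300-dim "embedding".
vocab = {f"entity_{i}": rng.normal(size=300) for i in range(201)}

rated = [f"entity_{i}" for i in range(200)]    # entities with ratings
X = np.stack([vocab[name] for name in rated])
y = rng.normal(size=200)                       # stand-in judgments

model = Ridge(alpha=1.0).fit(X, y)             # learn vectors -> judgments

new_vec = vocab["entity_200"]                  # unrated, but in vocabulary
predicted = model.predict(new_vec.reshape(1, -1))[0]
print(float(predicted))
```

The constraint the citing authors note falls out directly: if an entity has no embedding in the vocabulary, there is no vector to feed to `predict`, so the approach cannot score it.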
“… 2019 ). In another instance, the same author conducted experiments with 300 participants and predicted the perceived risk of several risk sources using a vector-space representation of natural language, concluding that the word distribution of language successfully captures human perception of risk (Bhatia 2019 ). Similar work has been conducted by Jaidka et al.…”
Section: Introduction
confidence: 99%