2019
DOI: 10.1525/collabra.282

Predicting High-Level Human Judgment Across Diverse Behavioral Domains

Abstract: Recent advances in machine learning, combined with the increased availability of large natural language datasets, have made it possible to uncover semantic representations that characterize what people know about and associate with a wide range of objects and concepts. In this paper, we examine the power of word embeddings, a popular approach for uncovering semantic representations, for studying high-level human judgment. Word embeddings are typically applied to linguistic and semantic tasks; however, we show t…

Cited by 33 publications (49 citation statements) · References 34 publications
“…We admit that the best regularized regression method is unknown, but feel that LASSO regression is a fair comparison for improper single regression mean imputation methods (i.e. linear and ridge regressions in Gultchin et al., 2019; Hollis et al., 2017; Mandera et al., 2015; Recchia et al., 2015; Richie et al., 2019; Thompson et al., 2018; van Paridon et al., 2019; Westbury et al., 2013). Which type of regularized regression has the more desirable bias in this context is difficult to determine, due to the black-box nature of word vectors, and we leave it for future work.…”
Section: Discussion
confidence: 99%
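The ridge-versus-LASSO comparison described in this excerpt can be sketched in a few lines. The following is a minimal, hypothetical example: the embeddings and the norm being predicted are synthetic stand-ins (the cited papers use real word vectors and human rating norms), and scikit-learn's `Ridge` and `Lasso` stand in for the regression variants compared.

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: 300-dim "word vectors" for 500 words, and a
# rating norm that depends only sparsely on the embedding dimensions.
X = rng.normal(size=(500, 300))
true_w = np.zeros(300)
true_w[:10] = rng.normal(size=10)              # only 10 dimensions matter
y = X @ true_w + rng.normal(scale=0.5, size=500)

# Compare the two regularized regressions by cross-validated R^2.
for name, model in [("ridge", Ridge(alpha=1.0)),
                    ("lasso", Lasso(alpha=0.05))]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {r2:.3f}")
```

On sparse synthetic targets like this one LASSO's tendency to zero out irrelevant dimensions tends to help; as the excerpt notes, which bias is preferable for real word vectors is an open question.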
“…The most common method for semantic norm extrapolation is linear regression, with a possible ridge penalty on the coefficients (Gultchin et al., 2019; Hollis et al., 2017; Mandera et al., 2015; Martínez-Huertas et al., 2020; Paetzold et al., 2016; Recchia et al., 2015; Richie et al., 2019; Thompson et al., 2018; Turton et al., 2020; Utsumi, 2018; van Paridon et al., 2019; Westbury et al., 2013). Thus, we chose a form of linear regression as our stand-in in the simulations for existing methods of extrapolating semantic norms.…”
Section: Methods
confidence: 99%
“…However, even though distributed semantic representations predict the semantic relatedness of concepts with high accuracy and can be applied to tens of thousands of common concepts, they do not possess complex featural and relational representations for concepts. In response, researchers have begun to combine distributed semantic representations with survey and experimental data, such as feature norms and participant ratings (Andrews et al., 2009; Derby et al., 2019; Lu et al., 2019; Richie et al., 2019). In most such applications, distributed semantic representations for concepts serve as inputs into more complex models that are fine-tuned on participant data, and are thus able to proxy the (often structured) knowledge at play in the participants' responses.…”
Section: Introduction
confidence: 99%
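The pipeline this excerpt describes — pre-trained semantic vectors as inputs to a more complex model fit on participant data — can be sketched as follows. This is a schematic example: the embeddings and ratings are random stand-ins, and a small scikit-learn MLP is a hypothetical choice for the "more complex model"; the cited papers use various architectures.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Stand-ins for pre-trained concept embeddings (200 concepts x 50 dims)
# and participant ratings that depend nonlinearly on a few dimensions.
embeddings = rng.normal(size=(200, 50))
ratings = np.tanh(embeddings[:, 0] + 0.5 * embeddings[:, 1])

# Fit ("fine-tune") a small model on the 150 concepts that have ratings,
# then extrapolate to the 50 unrated concepts.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(embeddings[:150], ratings[:150])
preds = model.predict(embeddings[150:])
```

The key design point is that the embeddings are fixed: only the downstream model learns from participant responses, which lets a few hundred ratings generalize across the full vocabulary the embeddings cover.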
“…Such semantic representations describe the strength of association between judgment questions and response options, and can thus quantitatively model associative processes in probability judgment, factual judgment, social judgment, political judgment, and multiattribute choice (Bhatia, 2017b, 2017c; Bhatia et al., 2018; Bhatia & Walasek, 2019; Caliskan et al., 2017; Garg et al., 2018; Hopkins, 2018; Holtzman et al., 2011). By specifying what people know and associate with judgment and decision targets, vector semantic representations also allow researchers to build powerful predictive models that take semantic vectors as inputs and output a judgment or decision (Bhatia, in press; Bhatia & Stewart, 2018; Richie et al., 2019). In this paper we further illustrate the value of semantic vectors by showing how they can be used to model the content of thoughts during naturalistic decision making.…”
Section: Semantic Vectors For Knowledge Representation
confidence: 99%
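The "strength of association between judgment questions and response options" in this excerpt is standardly measured as the cosine similarity between vectors. A minimal sketch, with random vectors standing in for actual learned embeddings of a question and two candidate responses:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity: the standard vector-space association measure."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(2)

# Hypothetical vectors: a judgment target and two response options, one
# constructed to lie near the target and one drawn independently.
question = rng.normal(size=100)
option_a = question + rng.normal(scale=0.5, size=100)  # closely associated
option_b = rng.normal(size=100)                        # unrelated

# Under an associative account, the option with the higher similarity
# to the question is the more likely response.
print(cosine(question, option_a), cosine(question, option_b))
```

In the associative-judgment models cited above, similarities like these feed into a choice rule; here the construction guarantees that option_a scores higher than option_b.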