2020
DOI: 10.1073/pnas.1915841117
Scaling up psychology via Scientific Regret Minimization

Abstract: Do large datasets provide value to psychologists? Without a systematic methodology for working with such datasets, there is a valid concern that analyses will produce noise artifacts rather than true effects. In this paper, we offer a way to enable researchers to systematically build models and identify novel phenomena in large datasets. One traditional approach is to analyze the residuals of models, the biggest errors they make in predicting the data, to discover what might be missing from those models. However…
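The residual-analysis idea in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the data, the linear "theory" model, and the polynomial stand-in for a flexible machine-learning predictor are all hypothetical. The point it shows is the one the paper builds on: raw residuals of an interpretable model mix genuine misfit with irreducible noise, whereas comparing the model against a flexible data-driven predictor isolates the systematic errors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "large dataset": the true effect is quadratic,
# and each observation carries substantial noise.
x = rng.uniform(-2, 2, 5000)
y = x**2 + rng.normal(0, 1.0, x.size)

# Interpretable "theory" model: a linear fit, deliberately misspecified.
theory_pred = np.polyval(np.polyfit(x, y, 1), x)

# Flexible data-driven model standing in for a machine-learning
# predictor trained on the large sample: a high-degree polynomial.
flex_pred = np.polyval(np.polyfit(x, y, 9), x)

# Raw residuals confound real model failures with observation noise...
raw_resid = y - theory_pred

# ...whereas comparing the theory model to the flexible model's
# predictions averages the noise away, leaving only systematic misfit.
regret = flex_pred - theory_pred

# The regret signal has far less variance than the raw residuals,
# because the per-observation noise (variance ~1 here) drops out.
print(np.var(raw_resid) > np.var(regret))
```

On this synthetic example, the variance gap between the two signals is roughly the noise variance, which is exactly the component that residual analysis against raw data cannot distinguish from a missing effect.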

Cited by 68 publications (40 citation statements)
References 72 publications
“…Bourgin et al. (2019) took advantage of the larger sample sizes that can be obtained through virtual labs to scale up this approach, collecting human decisions for over 10,000 pairs of gambles. The resulting data set can be used to evaluate models of decision-making and is at a scale where machine learning methods can be used to augment the insights of human researchers (Agrawal, Peterson, and Griffiths 2020).…”
Section: Individuals
confidence: 99%
“…The Hofman team provides two examples. One, from Athey et al. (2011), builds an explanatory model of bidding behavior in an auction and uses it to predict outcomes that are then tested against the actual outcomes. The other involves coordinate ascent algorithms that iteratively alternate between predictive and explanatory models, in particular by manipulating some aspect of the subjects under study to help better explain the outcomes (Agrawal et al., 2020). Such models should provide benefits greater than explanatory or predictive models used in isolation, because they can predict the "magnitude and direction of individual outcomes under changes or interventions" (Hofman et al. 2021: Table 2).…”
Section: Integrative Modeling
confidence: 99%
“…They focused on risky choices and extensively studied issues in decision theory [39,40]. In addition, Agrawal et al. proposed methodologies for building models and identifying novel phenomena in large datasets [41]. To overcome noise artifacts in such datasets, they combined sufficiently large datasets with data-driven models.…”
Section: Introduction
confidence: 99%