2022
DOI: 10.31234/osf.io/rm49a
Preprint

Talent Spotting in Crowd Prediction

Abstract: Who is good at prediction? Addressing this question is key to recruiting and cultivating accurate crowds and effectively aggregating their judgments. Recent research on superforecasting has demonstrated the importance of individual, persistent skill in crowd prediction. This chapter takes stock of skill identification measures in probability estimation tasks, and complements the review with original analyses, comparing such measures directly within the same dataset. We classify all measures in five broad categ…

Cited by 4 publications (22 citation statements) | References 55 publications
“…Because it is the first controlled longitudinal forecasting experiment, future work could use this dataset to firmly establish the degree of bias introduced when these confounds are artificially induced and various existing methods are used to attempt to eliminate them. For example, subsets of forecasters can be deleted from the dataset based on patterns of question difficulty, such that some forecasters are retained for difficult questions, and others for harder questions, to study how well within-item score standardization (Atanasov et al, 2017, 2020; Atanasov & Himmelstein, 2022; Himmelstein et al, 2021) adjusts for item difficulty in practice. Similarly, forecasters can be deleted from the dataset as a function of the timing of their forecasts, to assess temporal confounds in judgmental forecasting theory (Himmelstein, Budescu & Han, 2022).…”
Section: Discussion
confidence: 99%
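The statement above references within-item score standardization as a way to adjust forecaster accuracy for question difficulty. A minimal sketch of that idea follows; the toy data and variable names are illustrative assumptions, not taken from the cited papers:

```python
from statistics import mean, pstdev

# Hypothetical forecasts: (forecaster, question, probability, binary outcome).
forecasts = [
    ("A", "q1", 0.9, 1), ("B", "q1", 0.6, 1),
    ("A", "q2", 0.2, 0), ("B", "q2", 0.5, 0),
]

# Brier score for a binary event: (p - outcome)^2, lower is better.
brier = {(f, q): (p - o) ** 2 for f, q, p, o in forecasts}

# Within-item standardization: z-score each Brier score against the mean
# and SD of all scores on the same question, so that hard questions
# (high average error for everyone) do not unfairly penalize the
# forecasters who happened to answer them.
questions = sorted({q for _, q, _, _ in forecasts})
standardized = {}
for q in questions:
    scores = [s for (_, qq), s in brier.items() if qq == q]
    mu, sd = mean(scores), pstdev(scores)
    for (f, qq), s in brier.items():
        if qq == q:
            standardized[(f, q)] = (s - mu) / sd if sd > 0 else 0.0

# A forecaster's skill estimate is the mean of their standardized scores
# (negative = better than the average forecaster on the same questions).
skill = {f: mean(standardized[(f, q)] for q in questions) for f in ("A", "B")}
```

Here forecaster A beats B on both questions, so A's standardized skill is negative and B's positive regardless of how hard each question was in absolute terms; that invariance to per-question difficulty is the point of the technique.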
“…Past research exploring the utility of this approach has proved fruitful but methodologically challenging. Most research on the psychology of forecasting utilizes one of two sources of data: (a) judgments collected during forecasting tournaments, in which large samples of forecasters self-select from among hundreds of possible forecasting questions, and make their predictions at different points in time, at their leisure (Atanasov & Himmelstein, 2022; Himmelstein et al, 2021; Mellers, Stone, Atanasov, et al, 2015; Morstatter et al, 2019; Tetlock & Gardner, 2016); (b) professional forecasters, who make predictions for a living about finance, sports, weather, health, politics, and more (Garcia, 2003; Han & Budescu, 2019, 2022; Himmelstein, Budescu & Han, 2022; Mandel & Barnes, 2014; Spann & Skiera, 2009). Both data sources inherently include several confounds that make controlled scientific inference difficult.…”
Section: Intersubjective Assessment
confidence: 99%