How should we combine disagreeing expert judgments of the likelihood of an event? A common solution is simple averaging, which allows independent individual errors to cancel out. However, judgments can be correlated because the experts' information overlaps, which miscalibrates the simple average. Optimal weights for a weighted average are typically unknown and require past data to estimate reliably. This paper proposes an algorithm for aggregating probabilistic judgments under shared information. Each expert reports a prediction and a meta-prediction, the latter being an estimate of the average of the other individuals' predictions. In a Bayesian setup, I show that if the average prediction is a consistent estimator, the percentage of predictions that exceed the average prediction should equal the percentage of meta-predictions that do. An "overshoot surprise" occurs when the two measures differ. The Surprising Overshoot algorithm uses the information revealed by an overshoot surprise to correct the miscalibration in the average prediction. Experimental evidence suggests that the algorithm performs well in moderate to large samples and in aggregation problems where individuals disagree in their predictions.
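To make the overshoot diagnostic concrete, here is a minimal Python sketch of the comparison the abstract describes: the share of predictions above the average prediction versus the share of meta-predictions above it. The function name, the toy numbers, and the return format are illustrative assumptions, not the paper's implementation, and the sketch covers only the diagnostic, not the paper's correction step.

```python
import numpy as np

def overshoot_surprise_sketch(predictions, meta_predictions):
    """Diagnose an 'overshoot surprise' (illustrative sketch).

    predictions[i]      : expert i's probability estimate of the event
    meta_predictions[i] : expert i's estimate of the average of the
                          OTHER experts' predictions

    Returns the average prediction, the two overshoot rates, and the
    surprise (their difference). Per the abstract, a nonzero surprise
    signals miscalibration of the simple average due to shared
    information.
    """
    p = np.asarray(predictions, dtype=float)
    m = np.asarray(meta_predictions, dtype=float)
    avg = p.mean()

    # Share of predictions and of meta-predictions above the average
    # prediction; these should match when the average prediction is a
    # consistent estimator.
    overshoot_pred = np.mean(p > avg)
    overshoot_meta = np.mean(m > avg)
    surprise = overshoot_pred - overshoot_meta
    return avg, overshoot_pred, overshoot_meta, surprise

# Hypothetical example: meta-predictions cluster above what the
# dispersed individual predictions imply, producing a surprise.
preds = [0.55, 0.60, 0.70, 0.80, 0.85]
metas = [0.72, 0.74, 0.75, 0.76, 0.78]
avg, op, om, s = overshoot_surprise_sketch(preds, metas)
print(f"average={avg:.2f}  pred-overshoot={op:.2f}  "
      f"meta-overshoot={om:.2f}  surprise={s:+.2f}")
```

In this toy case the average prediction is 0.70; two of five predictions exceed it while all five meta-predictions do, so the diagnostic flags a surprise. How the algorithm then adjusts the average is specified in the paper itself.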
The simple average of subjective forecasts is known to be effective for estimating uncertain quantities. However, the benefits of averaging can be limited when forecasters have shared information, which leaves that shared information overrepresented in the average forecast. This article proposes a simple incentive-based solution to the shared-information problem. Experts are grouped with nonexperts into forecasting crowds, and each is rewarded for the accuracy of the crowd average rather than for individual accuracy. In equilibrium, experts anticipate the overrepresentation of shared information and extremize their forecasts toward their private information to boost crowd accuracy. This self-extremization of individual expert forecasts alleviates the shared-information problem. Experimental evidence suggests that incentives for crowd accuracy can induce self-extremization even in small crowds, where winner-take-all contests (another incentive-based solution) are not effective.
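To illustrate what self-extremization looks like mechanically, here is a minimal sketch assuming a simple linear extremization in log-odds space. The function, the baseline value, and the weight `alpha` are hypothetical illustrations; in the article the degree of extremization arises endogenously from the crowd-accuracy incentive in equilibrium, not from a fixed parameter.

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def self_extremize(private_forecast, shared_baseline, alpha=0.5):
    """Push a forecast away from the shared baseline toward the
    expert's private information, in log-odds space.

    alpha is a hypothetical extremization weight chosen purely for
    illustration; the article derives the equilibrium behavior from
    the reward for crowd accuracy.
    """
    z = logit(private_forecast) + alpha * (
        logit(private_forecast) - logit(shared_baseline)
    )
    return sigmoid(z)

# An expert whose private posterior (0.70) sits above the shared
# baseline (0.60) reports a more extreme forecast, reducing the
# weight of the shared information in the crowd average.
print(round(self_extremize(0.70, 0.60, alpha=0.5), 3))  # ≈ 0.744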