A common approach to improving probabilistic forecasts is to identify and leverage the experts in the crowd based on forecasters' performance on prior questions with known outcomes. However, such performance records are often unavailable to decision-makers, making it difficult to identify and leverage expertise. In the current paper, we propose a novel algorithm for aggregating probabilistic forecasts using forecasters' meta-predictions about what other forecasters will predict. We test the performance of an extremised version of our algorithm against current forecasting approaches in the literature and show that our algorithm significantly outperforms all other approaches on a large collection of 500 binary decision problems varying across five levels of difficulty. The success of our algorithm demonstrates the potential of using meta-predictions to leverage latent expertise in environments where forecasters' expertise cannot otherwise be easily identified.
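The abstract does not specify the algorithm itself, but "extremising" an aggregate probability is a standard post-processing step: push the crowd's combined forecast away from 0.5 to counteract the moderating effect of averaging. A minimal sketch, assuming a simple power transform (the exponent `a` is a hypothetical tuning parameter, not a value from the paper):

```python
def extremise(p, a=2.5):
    """Push an aggregate probability p away from 0.5.

    Uses the power transform p^a / (p^a + (1-p)^a), a common
    extremising rule; a=2.5 is an illustrative choice only.
    """
    return p**a / (p**a + (1 - p)**a)

# Extremise the plain mean of a small crowd's forecasts.
forecasts = [0.60, 0.70, 0.65]
mean_p = sum(forecasts) / len(forecasts)
print(extremise(mean_p))  # larger than mean_p, since mean_p > 0.5
```

The transform is symmetric (probabilities below 0.5 are pushed toward 0, above 0.5 toward 1) and leaves 0.5 fixed.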
Modern forecasting algorithms use the wisdom of crowds to produce forecasts better than those of the best identifiable expert. However, these algorithms may be inaccurate when crowds are systematically biased or when expertise varies substantially across forecasters. Recent work has shown that meta-predictions—forecasts of the average forecast of others—can be used to correct for biases even when no external information, such as forecasters' past performance, is available. We explore whether meta-predictions can also be used to improve forecasts by identifying and leveraging the expertise of forecasters. We develop a confidence-based version of the Surprisingly Popular algorithm proposed by Prelec, Seung, and McCoy. As with the original algorithm, our new algorithm is robust to bias. However, unlike the original algorithm, our version is predicted to always weight forecasters with more informative private signals more heavily than forecasters with less informative ones. In a series of experiments, we find that the modified algorithm does a better job of weighting informed forecasters than the original algorithm and show that individuals who are correct more often on similar decision problems contribute more to the final decision than other forecasters. Empirically, the modified algorithm outperforms the original algorithm on a set of 500 decision problems. This paper was accepted by Yan Chen, decision analysis.
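The original Surprisingly Popular rule of Prelec, Seung, and McCoy selects the answer that is more popular than the crowd predicted it would be. A minimal sketch for a binary question (function and variable names are ours, not the paper's; the confidence-based variant described above is not shown):

```python
from statistics import mean

def surprisingly_popular(votes, meta_predictions):
    """Surprisingly Popular decision rule for a binary question.

    votes: list of 0/1 answers (1 = "yes").
    meta_predictions: each forecaster's prediction of the fraction
    of the crowd that will answer "yes".
    Returns the answer whose actual popularity exceeds its
    predicted popularity.
    """
    actual_yes = mean(votes)
    predicted_yes = mean(meta_predictions)
    return 1 if actual_yes > predicted_yes else 0

# 60% vote "yes", but the crowd expected about 81% to vote "yes":
# "no" is surprisingly popular, so the rule answers 0.
print(surprisingly_popular([1, 1, 1, 0, 0], [0.8, 0.9, 0.7, 0.8, 0.85]))  # 0
```

The intuition is that forecasters with informative private signals expect others to disagree with them, so an answer that outperforms its own predicted popularity carries evidence beyond the raw vote count.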
In recent years, a number of crowd aggregation approaches have been proposed to combine the judgments of different individuals in problems where decision-makers do not have records of the individuals’ past performance in that domain. However, it is often possible to obtain a measure of the individuals’ past performance in other domains. The current article explores the extent to which individuals’ relative expertise in one domain can be used to weight their judgments in another domain. Over three experiments comprising a range of decision problems from art, science, sport, and a test of emotional intelligence, we compare the performance of aggregation approaches that do not use individuals’ past performance to those that weight by individuals’ past performance on questions from the same domain (within-domain weighting) or from a different domain (cross-domain weighting). Our results show that although within-domain weighting generally outperforms all other aggregation approaches, cross-domain weighting can be as effective as within-domain weighting in some circumstances. We present a simple model of the relationship between within-domain and cross-domain performance and discuss the conditions under which cross-domain weighting is likely to be effective. Our results demonstrate the potential of cross-domain weighting in problems where records of individuals’ past performance in the domain of interest are unavailable.
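The within- and cross-domain weighting schemes compared above both reduce to weighting each individual's judgment by a measure of their past performance; they differ only in which domain that measure comes from. A minimal sketch, assuming a simple linear weighting by proportion correct (one of many possible weighting schemes; the article's exact scheme is not specified here):

```python
def weighted_judgment(judgments, past_accuracy):
    """Performance-weighted average of probability judgments.

    judgments: each individual's probability judgment in [0, 1].
    past_accuracy: each individual's proportion correct on prior
    questions, drawn either from the same domain (within-domain
    weighting) or from a different domain (cross-domain weighting).
    """
    total = sum(past_accuracy)
    return sum(j * w for j, w in zip(judgments, past_accuracy)) / total

# An individual with a strong track record dominates the aggregate.
print(weighted_judgment([0.9, 0.2], [0.8, 0.2]))  # closer to 0.9 than the plain mean
```

Cross-domain weighting is useful precisely because `past_accuracy` can be measured on any questions with known outcomes, even when no records exist in the domain of interest.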