In recent years, a number of crowd aggregation approaches have been proposed to combine the judgments of different individuals in problems where decision-makers do not have records of the individuals’ past performance in that domain. However, it is often possible to obtain a measure of the individuals’ past performance in other domains. The current article explores the extent to which individuals’ relative expertise in one domain can be used to weight their judgments in another domain. Over three experiments comprising a range of decision problems from art, science, sport, and a test of emotional intelligence, we compare the performance of aggregation approaches that do not use individuals’ past performance to those that weight by individuals’ past performance on questions from the same domain (within-domain weighting) or from a different domain (cross-domain weighting). Our results show that although within-domain weighting generally outperforms all other aggregation approaches, cross-domain weighting can be as effective as within-domain weighting in some circumstances. We present a simple model of the relationship between within-domain and cross-domain performance and discuss the conditions under which cross-domain weighting is likely to be effective. Our results demonstrate the potential of cross-domain weighting in problems where records of individuals’ past performance in the domain of interest are unavailable.
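The weighting schemes compared above can be illustrated with a minimal sketch. The exact weighting function used in the article is not specified here, so the code below makes a simple illustrative assumption: each individual's judgment is weighted in proportion to a past-performance score, which may come from the same domain (within-domain) or a different one (cross-domain). The function name and the example numbers are hypothetical.

```python
import numpy as np

def weighted_aggregate(judgments, past_scores):
    """Combine individual judgments, weighting each by past accuracy.

    judgments: shape (n_individuals,) -- estimates for one question.
    past_scores: shape (n_individuals,) -- accuracy on earlier questions
        (higher = better), possibly measured in a *different* domain.
    """
    weights = np.asarray(past_scores, dtype=float)
    weights = weights / weights.sum()        # normalize weights to sum to 1
    return float(np.dot(weights, judgments))

# Unweighted mean vs. performance-weighted mean for three judges;
# the third judge was twice as accurate on past (cross-domain) questions.
judgments = np.array([10.0, 20.0, 60.0])
scores = np.array([1.0, 1.0, 2.0])
print(np.mean(judgments))                    # 30.0 (unweighted baseline)
print(weighted_aggregate(judgments, scores)) # 37.5 (weighted toward judge 3)
```

Cross-domain weighting simply substitutes scores measured in another domain for `past_scores`; the article's finding is that this can approach within-domain weighting when performance correlates across domains.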
How well can social scientists predict societal change, and what processes underlie their predictions? To answer these questions, we ran two forecasting tournaments testing the accuracy of predictions of societal change in domains commonly studied in the social sciences: ideological preferences, political polarization, life satisfaction, sentiment on social media, and gender-career and racial bias. After being provided with historical trend data for each domain, social scientists submitted pre-registered monthly forecasts for a year (Tournament 1; N=86 teams/359 forecasts), with an opportunity to update forecasts based on new data six months later (Tournament 2; N=120 teams/546 forecasts). Benchmarking forecasting accuracy revealed that social scientists’ forecasts were on average no more accurate than simple statistical models (historical means, random walk, or linear regressions) or the aggregate forecasts of a sample from the general public (N=802). However, scientists were more accurate if they had scientific expertise in a prediction domain, were interdisciplinary, used simpler models, and based predictions on prior data.
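The three statistical benchmarks named above (historical mean, random walk, and linear regression) are standard baselines and can be sketched as follows. This is an illustrative implementation under common definitions, not the tournament's actual benchmarking code; the function names and example series are hypothetical.

```python
import numpy as np

def historical_mean_forecast(series, horizon):
    # Forecast every future month as the mean of the observed history.
    return np.full(horizon, np.mean(series))

def random_walk_forecast(series, horizon):
    # Forecast every future month as the last observed value.
    return np.full(horizon, series[-1])

def linear_trend_forecast(series, horizon):
    # Fit a straight line to the history and extrapolate it forward.
    t = np.arange(len(series))
    slope, intercept = np.polyfit(t, series, 1)
    future_t = np.arange(len(series), len(series) + horizon)
    return intercept + slope * future_t

# Hypothetical monthly indicator values.
history = np.array([50.0, 51.0, 52.0, 53.0])
print(historical_mean_forecast(history, 2))  # [51.5 51.5]
print(random_walk_forecast(history, 2))      # [53. 53.]
print(linear_trend_forecast(history, 2))     # [54. 55.]
```

The finding is that expert forecasts were, on average, no more accurate than baselines this simple, which is why such models are a useful floor for benchmarking.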