Most TI discussion papers can be downloaded at http://www.tinbergen.nl.
Abstract: We examine cooperative behavior when large sums of money are at stake, using data from the TV game show "Golden Balls". At the end of each episode, contestants play a variant of the classic Prisoner's Dilemma for large and widely ranging stakes averaging over $20,000. Cooperation is surprisingly high for amounts that would normally be considered consequential but look tiny in their current context, a pattern we call a "big peanuts" phenomenon. Utilizing the prior interaction among contestants, we find evidence that people have reciprocal preferences. Surprisingly, there is little support for conditional cooperation in our sample. That is, players do not seem to be more likely to cooperate if their opponent might be expected to cooperate. Further, we replicate earlier findings that males are less cooperative than females, but this gender effect reverses for older contestants because men become more cooperative with age. JEL: C72, C93, D03
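The endgame described above can be sketched as a payoff function. This is a minimal illustration of the "split or steal" structure the abstract refers to; the function name and the example jackpot are ours, not the paper's notation.

```python
def payoffs(choice_a: str, choice_b: str, jackpot: float) -> tuple[float, float]:
    """Return (payoff_a, payoff_b) for one round of split-or-steal."""
    if choice_a == "split" and choice_b == "split":
        return jackpot / 2, jackpot / 2   # mutual cooperation: share the pot
    if choice_a == "steal" and choice_b == "split":
        return jackpot, 0.0               # A defects on a cooperating B
    if choice_a == "split" and choice_b == "steal":
        return 0.0, jackpot               # B defects on a cooperating A
    return 0.0, 0.0                       # mutual defection: both leave empty-handed

print(payoffs("split", "split", 20_000))  # (10000.0, 10000.0)
```

Note that stealing weakly (rather than strictly) dominates splitting here, which is why the game is a variant of, rather than identical to, the classic Prisoner's Dilemma.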
Experiments frequently use a random incentive system (RIS), where only tasks that are randomly selected at the end of the experiment are played out for real. The most common type pays every subject for one of her multiple tasks (within-subjects randomization). Recently, another type has become popular, where a subset of subjects is randomly selected, and only these subjects receive one real payment (between-subjects randomization). In earlier tests with simple, static tasks, RISs performed well. The present study investigates RISs in a more complex, dynamic choice experiment. We find that between-subjects randomization reduces risk aversion. While within-subjects randomization delivers unbiased measurements of risk aversion, it does not eliminate carry-over effects from previous tasks. Both types generate an increase in subjects' error rates. These results suggest that caution is warranted when applying RISs to more complex and dynamic tasks.
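The two randomization schemes can be sketched as follows. This is an illustrative sketch of the payment rules as described in the abstract; the subject identifiers, data shapes, and function names are ours, not the paper's implementation.

```python
import random

def within_subjects(task_earnings: dict[str, list[float]], rng: random.Random) -> dict[str, float]:
    """Within-subjects RIS: every subject is paid for one randomly drawn task."""
    return {subj: rng.choice(earnings) for subj, earnings in task_earnings.items()}

def between_subjects(task_earnings: dict[str, list[float]], k: int, rng: random.Random) -> dict[str, float]:
    """Between-subjects RIS: only k randomly selected subjects receive a real
    payment, each for one randomly drawn task; everyone else earns nothing."""
    paid = rng.sample(sorted(task_earnings), k)
    return {subj: rng.choice(task_earnings[subj]) for subj in paid}

rng = random.Random(42)
earnings = {"s1": [4.0, 9.0], "s2": [7.0, 2.0], "s3": [5.0, 5.0]}
print(within_subjects(earnings, rng))      # all three subjects are paid
print(between_subjects(earnings, 1, rng))  # only one subject is paid
```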
The quality of decisions depends on the accuracy of estimates of relevant quantities. According to the wisdom of crowds principle, accurate estimates can be obtained by combining the judgements of different individuals [1,2]. This principle has been successfully applied to improve, for example, economic forecasts [3-5], medical judgements [6-9] and meteorological predictions [10-13]. Unfortunately, there are many situations in which it is infeasible to collect judgements of others. Recent research proposes that a similar principle applies to repeated judgements from the same person [14]. This paper tests this promising approach on a large scale in a real-world context. Using proprietary data comprising 1.2 million observations from three incentivized guessing competitions, we find that within-person aggregation indeed improves accuracy and that the method works better when there is a time delay between subsequent judgements. However, the benefit pales against that of between-person aggregation: the average of a large number of judgements from the same person is barely better than the average of two judgements from different people.

Many human decisions, whether in the business, political, medical or personal domain, require the decision-maker to estimate unknown quantities. One way to improve accuracy is to combine the estimates of a group of individuals. Aggregated estimates generally outperform most and sometimes all of the underlying estimates, and are often close to the true value. This phenomenon has become known as 'the wisdom of crowds' [1,2]. It arises from the statistical principle that aggregation of imperfect estimates diminishes the role of errors [15-18]. Generally, one has to combine only a few estimates to get most of the effect [19]. The phenomenon was first described in Nature by the renowned British scientist Sir Francis Galton [20].
Galton witnessed a weight judging competition at the 1906 West of England Fat Stock and Poultry Exhibition, where visitors could win a prize by paying six pence and estimating the weight of an exhibited ox after it had been "slaughtered and dressed". Galton collected all 800 tickets with estimates and found that the aggregate judgement of the group closely approximated the true value: the mean judgement was 1,197 lb, and the true value was 1,198 lb [21,22]. Similar results have since been observed in a wide range of experiments [23-29]. Recent research proposes that the same principle applies to repeated judgements from the same person [14]. Laboratory experiments confirm that estimation accuracy can indeed be improved by aggregating estimates from a single individual [16,30-35]. The benefit of within-person aggregation reflects what has been dubbed 'the wisdom of the inner crowd', and can potentially boost the quality of individual decision making [36]. This paper analyses within-person aggregation outside the psychological laboratory. We use three large proprietary data sets from
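The contrast between within-person and between-person aggregation can be illustrated with a small simulation, assuming a simple error model in which each person's judgements share a systematic personal bias plus independent noise. All parameter values below are arbitrary choices of ours for illustration, not estimates from the paper's data.

```python
import random
import statistics

TRUE_VALUE = 1198.0   # Galton's ox weight, used here only as a reference point
SIGMA_BIAS = 60.0     # spread of person-specific systematic bias (assumed)
SIGMA_NOISE = 30.0    # spread of independent noise per judgement (assumed)
rng = random.Random(0)

def judgement(bias: float) -> float:
    """One noisy estimate: truth + the person's shared bias + fresh noise."""
    return TRUE_VALUE + bias + rng.gauss(0, SIGMA_NOISE)

inner_errors, pair_errors = [], []
for _ in range(2000):
    bias_a = rng.gauss(0, SIGMA_BIAS)   # person A's systematic bias
    bias_b = rng.gauss(0, SIGMA_BIAS)   # person B's systematic bias
    # "inner crowd": average 50 judgements from person A alone
    inner = statistics.fmean(judgement(bias_a) for _ in range(50))
    # "outer crowd": average one judgement each from A and B
    pair = (judgement(bias_a) + judgement(bias_b)) / 2
    inner_errors.append(abs(inner - TRUE_VALUE))
    pair_errors.append(abs(pair - TRUE_VALUE))

# Averaging within one person cannot remove that person's systematic bias,
# so here even 50 inner judgements lose to two judgements from two people.
print(round(statistics.fmean(inner_errors), 1), round(statistics.fmean(pair_errors), 1))
```

Under this model, the inner-crowd average converges to the person's biased value, while averaging across people also averages away the biases, which is one way to rationalize the paper's headline comparison.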