Experimental studies of choice behavior document distinct, and sometimes contradictory, deviations from maximization. For example, people tend to overweight rare events in one-shot decisions under risk, and to exhibit the opposite bias when they rely on past experience. The common explanations of these results assume that the contradictory anomalies reflect situation-specific processes that involve the weighting of subjective values and the use of simple heuristics. The current paper analyzes 14 choice anomalies that have been described by different models, including the Allais, St. Petersburg, and Ellsberg paradoxes, and the reflection effect. Next, it uses a choice prediction competition methodology to clarify the interaction between the different anomalies. It focuses on decisions under risk (known payoff distributions) and under ambiguity (unknown probabilities), with and without feedback concerning the outcomes of past choices. The results demonstrate that it is not necessary to assume situation-specific processes. The distinct anomalies can be captured by assuming high sensitivity to the expected return and four additional tendencies: pessimism, bias toward equal weighting, sensitivity to payoff sign, and an effort to minimize the probability of immediate regret. Importantly, feedback increases sensitivity to the probability of regret. Simple abstractions of these assumptions, variants of the model Best Estimate And Sampling Tools (BEAST), allow surprisingly accurate ex ante predictions of behavior. Unlike the popular models, BEAST does not assume subjective weighting functions or cognitive shortcuts. Rather, it assumes the use of sampling tools and reliance on small samples, in addition to the estimation of the expected values.
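The full BEAST model is specified in the paper itself; as a minimal illustration of its core "reliance on small samples plus expected-value estimation" idea, the following Python sketch (not the published model; the blending weight, sample size, and the `choose` helper are illustrative assumptions) blends an expected-value estimate with the mean of a few random draws. Because rare events often fail to appear in a small sample, such a rule naturally underweights them, consistent with the experience-based bias described above:

```python
import random

def expected_value(gamble):
    """EV of a gamble given as a list of (payoff, probability) pairs."""
    return sum(payoff * prob for payoff, prob in gamble)

def sample_mean(gamble, k):
    """Mean of k random draws from the gamble's payoff distribution."""
    payoffs = [payoff for payoff, _ in gamble]
    probs = [prob for _, prob in gamble]
    return sum(random.choices(payoffs, weights=probs, k=k)) / k

def choose(option_a, option_b, k=5, ev_weight=0.5):
    """Score each option by blending its EV with a small-sample mean;
    the option with the higher score is chosen."""
    score_a = ev_weight * expected_value(option_a) + (1 - ev_weight) * sample_mean(option_a, k)
    score_b = ev_weight * expected_value(option_b) + (1 - ev_weight) * sample_mean(option_b, k)
    return "A" if score_a >= score_b else "B"

# A rare large loss (p = .05) is often absent from a 5-draw sample,
# so the risky option can look better from experience than its EV suggests.
risky = [(1, 0.95), (-20, 0.05)]   # EV = -0.05
safe = [(0, 1.0)]                  # EV = 0
```

With a larger sample size `k`, the rare loss is observed more often and the sketch's choices move back toward maximization, which is the sense in which sample size mediates the risk/experience gap.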
Abstract: A choice prediction competition is organized that focuses on decisions from experience in market entry games (http://sites.google.com/site/gpredcomp/ and http://www.mdpi.com/si/games/predict-behavior/). The competition is based on two experiments: an estimation experiment and a competition experiment. The two experiments use the same methods and subject pool, and examine games randomly selected from the same distribution. The current introductory paper presents the results of the estimation experiment, and clarifies the descriptive value of several baseline models. The experimental results reveal the robustness of eight behavioral tendencies that were documented in previous studies of market entry games and individual decisions from experience. The best baseline model (I-SAW) assumes reliance on small samples of experiences, and strong inertia when the recent results are not surprising. The competition experiment will be run in May 2010 (after the completion of this introduction), but its results will not be revealed until September. To participate in the competition, researchers are asked to e-mail the organizers models (implemented in computer programs) that read the incentive structure as input and derive the predicted behavior as output. The submitted models will be ranked based on their prediction error. The winners of the competition will be invited to publish a paper that describes their model.
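The exact scoring rule is set by the competition organizers; as a sketch of what "ranked based on their prediction error" can look like, a common choice in this literature is the mean squared deviation between a model's predicted choice rates and the observed ones. The function and variable names below are illustrative, not the competition's official code:

```python
def msd(predicted, observed):
    """Mean squared deviation between predicted and observed choice rates."""
    if len(predicted) != len(observed):
        raise ValueError("prediction and observation vectors must have equal length")
    return sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(predicted)

def rank_models(submissions, observed):
    """Order submitted models (name -> prediction vector) from best to worst fit."""
    return sorted(submissions, key=lambda name: msd(submissions[name], observed))
```

A model with MSD = 0 would reproduce the observed rates exactly; larger values indicate a worse fit, so the winner is the submission with the smallest score.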
Three experiments are presented that explore the assertion that loss aversion and diminishing sensitivity drive the effect of experience on choice behavior. The experiments focus on repeated choice tasks in which decision makers choose between alternatives and receive feedback after each choice. Experiments 1a and 1b show that behavioral tendencies that were previously interpreted as indications of loss aversion in decisions from experience are better described as products of diminishing sensitivity to absolute payoffs. Experiment 2 highlights a nominal magnitude effect: a decrease in the magnitude of the nominal payoffs eliminates the evidence for diminishing sensitivity. These and related previous results can be captured with a model that assumes reliance on small samples of subjective experiences, and an increase in diminishing sensitivity with payoff variability.
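Diminishing sensitivity to absolute payoffs is commonly formalized with a power value function. The sketch below is not the paper's fitted model; the exponent value is an illustrative assumption. It shows the key property: with an exponent below 1, the same nominal payoff difference is compressed more at large magnitudes than at small ones:

```python
def subjective_value(payoff, alpha=0.5):
    """Power value function: with alpha < 1, sensitivity to absolute
    payoffs diminishes as their magnitude grows."""
    sign = 1 if payoff >= 0 else -1
    return sign * abs(payoff) ** alpha

# The same nominal gap of 100 feels smaller at high magnitudes:
big_gap = subjective_value(1100) - subjective_value(1000)
small_gap = subjective_value(200) - subjective_value(100)
# big_gap < small_gap, consistent with diminishing sensitivity
```

The nominal magnitude effect described above fits this picture: shrinking the nominal payoffs moves choices into the range where the value function is nearly linear, so the evidence for diminishing sensitivity disappears.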