We study whether some of the most important models of decision-making under uncertainty are uniformly learnable, in the sense of PAC (probably approximately correct) learnability. Many studies in economics rely on Savage's model of (subjective) expected utility. The expected utility model is known to predict behavior that runs counter to how many agents actually make decisions (the contradiction usually takes the form of agents' choices in the Ellsberg paradox). As a consequence, economists have developed models of choice under uncertainty that seek to generalize the basic expected utility model. The resulting models are more general and therefore more flexible, but also more prone to overfitting. The purpose of our paper is to better understand this added flexibility. We focus on the classical expected utility (EU) model and its two most important generalizations: Choquet expected utility (CEU) and max-min expected utility (MEU). Our setting involves an analyst whose task is to estimate or learn an agent's preference based on data available on the agent's choices. A model of preferences is PAC learnable if the analyst can construct a learning rule that recovers the agent's preference to arbitrary precision given enough data. When a model is not learnable, we interpret this as the model being susceptible to overfitting. PAC learnability is known to be characterized by the model's VC dimension: thus our paper takes the form of a study of the VC dimension of economic models of choice under uncertainty. We show that EU and CEU have finite VC dimension, and are consequently learnable. Moreover, the sample complexity of the former is linear, and of the latter is exponential, in the number of states of uncertainty. The MEU model is learnable when there are two states but is not learnable when there are at least three states, in which case the VC dimension is infinite. Our results also exhibit a close relationship between learnability and the underlying axioms which characterise the model.
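The VC dimension underlying this abstract is defined via shattering: a class of binary rules shatters a finite set of points if it realizes every possible labeling of them. As a concrete illustration (not from the paper; the threshold class here is a hypothetical stand-in for a model of preferences), a brute-force shattering check:

```python
def shatters(hypotheses, points):
    """Check whether a hypothesis class (a list of predicate
    functions) shatters a finite set of points, i.e. realizes
    every one of the 2^n possible labelings of the points."""
    achieved = {tuple(h(x) for x in points) for h in hypotheses}
    return len(achieved) == 2 ** len(points)

# Illustrative class: one-dimensional threshold rules x -> (x >= t).
# (The t=t default binds each threshold at definition time.)
thresholds = [lambda x, t=t: x >= t for t in range(-1, 4)]

print(shatters(thresholds, [1]))     # a single point is shattered
print(shatters(thresholds, [1, 2]))  # two points are not: the labeling
                                     # (True, False) is unachievable
```

The VC dimension of a class is the size of the largest shatterable set; for thresholds it is 1, and finiteness of this quantity is what delivers PAC learnability for the EU and CEU models in the abstract.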
We interpret the problem of updating beliefs as a choice problem (selecting a posterior from a set of admissible posteriors) with a reference point (prior). We use AGM belief revision to define the support of admissible posteriors after the arrival of information, which applies also to zero probability events. We study two classes of updating rules for probabilities: (1) "lexicographic" updating rules, where posteriors are given by a lexicographic probability system, and (2) "minimum distance" updating rules, which select the posterior closest to the prior by some metric. We show that an updating rule is lexicographic if and only if it is Bayesian, AGM-consistent and satisfies a weak form of path independence. While not all lexicographic updating rules have a minimum distance representation, we study a sub-class of lexicographic rules, which we call "support-dependent" rules, which admit a minimum distance representation. Finally, we apply our approach to the problem of updating preferences.
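A well-known special case of the minimum-distance idea (not a claim about this paper's specific representation) is that ordinary Bayesian conditioning on a positive-probability event coincides with selecting the posterior supported on that event that minimizes Kullback-Leibler divergence to the prior. A minimal numerical sketch, with a hypothetical three-state prior:

```python
import math

def kl(q, p):
    """Kullback-Leibler divergence D(q || p), with 0*log(0/.) = 0."""
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

prior = [0.5, 0.3, 0.2]   # hypothetical prior over three states
# Information arrives ruling out state 2: admissible posteriors
# put probability only on states 0 and 1.
candidates = [[t / 1000, 1 - t / 1000, 0.0] for t in range(1, 1000)]

# "Minimum distance" selection: posterior closest to the prior in KL.
best = min(candidates, key=lambda q: kl(q, prior))

bayes = [0.5 / 0.8, 0.3 / 0.8, 0.0]   # ordinary Bayesian conditioning
print(best)    # [0.625, 0.375, 0.0], matching the Bayesian posterior
```

On zero-probability events this recipe breaks down (the conditional is undefined), which is exactly where the paper's AGM-based support restriction and lexicographic systems do their work.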
We study the degree of falsifiability of theories of choice. A theory is easy to falsify if relatively small data sets are enough to guarantee that the theory can be falsified: the Vapnik–Chervonenkis (VC) dimension of a theory is the largest sample size for which the theory is “never falsifiable.” We motivate VC dimension strategically: we consider a model with a strategic proponent of a theory and a skeptical consumer, or user, of theories. The former presents experimental evidence in favor of the theory; the latter may doubt whether the experiment could ever have falsified the theory. We focus on decision-making under uncertainty, considering the central models of expected utility, Choquet expected utility, and max–min expected utility. We show that expected utility has VC dimension that grows linearly with the number of states, while that of Choquet expected utility grows exponentially. The max–min expected utility model has infinite VC dimension when there are at least three states of the world. In consequence, expected utility is easily falsified, while the more flexible Choquet and max–min expected utility are hard to falsify. Finally, as VC dimension and statistical estimation are related, we study the implications of our results for machine learning approaches to preference recovery.
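The link between VC dimension and statistical estimation mentioned at the end can be made concrete with a classical (and deliberately loose) sufficient sample-size bound from Blumer, Ehrenfeucht, Haussler, and Warmuth (1989) for the realizable PAC setting; the constants below are theirs, not this paper's, and the point is only the dependence on the dimension d:

```python
import math

def pac_sample_bound(d, eps, delta):
    """A classical sufficient sample size for realizable PAC learning
    of a class with VC dimension d (Blumer et al. 1989):
    m >= max((4/eps) * log2(2/delta), (8*d/eps) * log2(13/eps))."""
    return max(4 / eps * math.log2(2 / delta),
               8 * d / eps * math.log2(13 / eps))

# Linear vs. exponential VC dimension in the number of states n,
# mirroring the EU (d ~ n) vs. CEU (d ~ 2^n) contrast:
for n in [2, 4, 8]:
    print(n, round(pac_sample_bound(n, 0.1, 0.05)),
             round(pac_sample_bound(2 ** n, 0.1, 0.05)))
```

With exponential VC dimension the required data grows exponentially in the number of states, and with infinite VC dimension (the max–min case with three or more states) no finite sample suffices.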
This paper provides an exact characterisation of the zeroes of the Riemann zeta function. The characterisation is based on a theorem about random vectors, which says that under some conditions, if a vector is always in the convex hull of the conditional expectations corresponding to any two mutually exclusive and exhaustive events, then the unconditional expectation of the random vector is equal to that vector.
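The easy converse direction of the theorem described here is just the law of total expectation: for any event A, the unconditional mean is the convex combination P(A)·E[X|A] + P(Aᶜ)·E[X|Aᶜ], so it always lies in the convex hull of the two conditional means. A minimal numerical check, with a hypothetical discrete random vector in R²:

```python
points = [(0.0, 1.0), (2.0, 3.0), (4.0, -1.0), (1.0, 1.0)]
probs = [0.1, 0.4, 0.3, 0.2]   # hypothetical distribution

def mean(pts, ws):
    """Probability-weighted mean of 2-D points (ws need not sum to 1)."""
    total = sum(ws)
    return tuple(sum(w * x[i] for w, x in zip(ws, pts)) / total
                 for i in range(2))

A, Ac = [0, 1], [2, 3]          # an event and its complement
pA = sum(probs[i] for i in A)
mA = mean([points[i] for i in A], [probs[i] for i in A])
mAc = mean([points[i] for i in Ac], [probs[i] for i in Ac])

# Law of total expectation: unconditional mean equals the convex
# combination pA * E[X|A] + (1 - pA) * E[X|A^c].
m = mean(points, probs)
combo = tuple(pA * a + (1 - pA) * b for a, b in zip(mA, mAc))
print(m, combo)   # the two vectors coincide
```

The substance of the paper's theorem is the harder converse: under its conditions, membership in every such convex hull pins the vector down as the unconditional expectation.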