Abstract: This paper presents a new solution to the problems for orthodox decision theory posed by the Pasadena game and its relatives. I argue that a key question raised by consideration of these gambles is whether evaluative compositionality (as I term it) is a requirement of rationality: is the value that an ideally rational agent places on a gamble determined by the values that she places on its possible outcomes, together with their mode of composition into the gamble (i.e. the probabilities assigned to them)? The …
“…We are thus agreeing with Seidenfeld et al. (2009) and Smith (2014) in rejecting what Smith (2014: 465) calls 'the principle of compositionality': 'the value which a rational agent places [or at least should place] on a gamble is [solely] a function of the values which she places on the possible outcomes of the gamble, together with the probabilities assigned to those outcomes by the gamble'. W and SP1 assign the same probabilities to the same payoffs but are not equally valuable.…”
“…For additional discussion of Easwaran's approach, see Fine (2008), Sprenger and Heesen (2011) and Smith (2014).⁴ Actually, only one of us (Vallentyne) finds Finite Weak Expectations compelling. The other (Lauwers) finds it compelling for one-stage lotteries but not for compound lotteries (e.g.…”
Abstract: We address the question, in decision theory, of how the value of risky options (gambles) should be assessed when they have no finite standard expected value, that is, where the sum of the probability-weighted payoffs is infinite or not well defined. We endorse, combine and extend (1) the proposal of Easwaran (2008) to evaluate options on the basis of their weak expected value, and (2) the proposal of Colyvan (2008) to rank options on the basis of their relative expected value.
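The "not well defined" case in this abstract can be made concrete with the Pasadena game discussed in the surrounding papers. The sketch below (a minimal Python illustration, assuming the standard payoff schedule from Nover and Hájek 2004) shows numerically that the probability-weighted payoffs form a conditionally convergent series, so their sum depends on the order of summation:

```python
import math

# Pasadena game (Nover and Hajek 2004): a fair coin is tossed until it first
# lands heads; if that happens on toss n, the payoff is (-1)**(n-1) * 2**n / n
# (a gain for odd n, a loss for even n). The probability of that outcome is
# 2**-n, so the n-th probability-weighted payoff is (-1)**(n-1) / n: the
# alternating harmonic series. It converges conditionally but not absolutely,
# so by Riemann's rearrangement theorem its sum depends on the order in which
# the terms are added -- the standard expected value is not well defined.

def partial_sum(order, k):
    """Sum the first k probability-weighted payoffs, taken in `order`."""
    return math.fsum((-1) ** (n - 1) / n for n in order[:k])

N = 60_000
natural = list(range(1, N + 1))  # terms in the order n = 1, 2, 3, ...

# Rearrangement: two positive (odd-n) terms, then one negative (even-n) term.
odds, evens = iter(range(1, 2 * N, 2)), iter(range(2, 2 * N, 2))
rearranged = []
while len(rearranged) < N:
    rearranged += [next(odds), next(odds), next(evens)]

print(partial_sum(natural, N))     # ≈ ln 2         ≈ 0.6931
print(partial_sum(rearranged, N))  # ≈ (3/2) * ln 2 ≈ 1.0397
```

Both orderings sum exactly the same probability-weighted payoffs, yet they converge to different values; on Easwaran's proposal it is the weak expected value, ln 2, that is singled out for this game.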
“…Likewise, if a₁ stochastically dominates a₂, then the area of the utility contour corresponding to a₁ will …³⁷ The proposal offered by Smith (2014) yields verdicts in an equally broad range of cases. But, like Hájek (2014), I find many of these verdicts implausible.…”
Section: Stochastic Equivalence and Stochastic Dominance
“…So far, discussions of this issue have generally favored the first option. For example, Seidenfeld et al (2009), Smith (2014), and Lauwers and Vallentyne (2016) all suggest we should reject Stochastic Equivalence and Stochastic Dominance, and thus reject views which entail those principles, like Difference Minimizing Theory. I disagree.…”
Standard decision theory has trouble handling cases involving acts without finite expected values. This paper has two aims. First, building on earlier work by Colyvan (2008), Easwaran (2014b), and Lauwers and Vallentyne (2016), it develops a proposal for dealing with such cases, Difference Minimizing Theory. Difference Minimizing Theory provides satisfactory verdicts in a broader range of cases than its predecessors. And it vindicates two highly plausible principles of standard decision theory, Stochastic Equivalence and Stochastic Dominance. The second aim is to assess some recent arguments against Stochastic Equivalence and Stochastic Dominance. If successful, these arguments refute Difference Minimizing Theory. This paper contends that these arguments are not successful.
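For reference, the two principles this abstract defends are commonly stated as follows (one standard formulation; the paper's own wording may differ in detail):

```latex
\begin{itemize}
  \item \textbf{Stochastic Equivalence:} if gambles $X$ and $Y$ are identically
        distributed, i.e.\ $P(X \geq t) = P(Y \geq t)$ for every payoff level
        $t$, then a rational agent is indifferent between them.
  \item \textbf{Stochastic Dominance:} if $P(X \geq t) \geq P(Y \geq t)$ for
        every $t$, with strict inequality for some $t$, then a rational agent
        prefers $X$ to $Y$.
\end{itemize}
```

Views that evaluate gambles non-compositionally in cases of undefined expectation can conflict with these principles, which is why the snippets above treat accepting or rejecting them as the central choice point.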
“…(...) Restrict decision theory to finite state spaces (...) Restrict decision theory to bounded utility functions." Nicholas J.J. Smith (2014), notably, defends a hybrid of these:…”
We address the problem that gambles having undefined expectation pose for decision theory. Observing that to place a value on such a gamble exposes one to a finitary diachronic Dutch Book, we defend a variant of Mark Colyvan's "Relative Expected Utility Theory" (REUT), noting that it has the property of never preferring a gamble X to an identically distributed gamble Y. We demonstrate, however, that even REUT subscribers succumb to diachronic incoherence should they assign infinite expectation to a gamble they actually confront. In a final section, we use basic principles of anthropic reasoning (as formulated by Brandon Carter) to show why one needn't ever do so. We take utility to be linear with respect to currency, and in particular unbounded.