The St. Petersburg paradox is a centuries-old philosophical puzzle concerning a lottery with infinite expected payoff that people are nonetheless willing to pay only a small amount to play. Despite many attempts and several proposals, no generally accepted resolution is yet at hand. In this work, we present the first resource-rational, process-level explanation of this paradox, demonstrating that it can be accounted for by a variant of normative expected-utility valuation that acknowledges cognitive limitations. Specifically, we show that Nobandegani et al.'s (2018) metacognitively rational model, sample-based expected utility (SbEU), can account for major experimental findings on this paradox. Crucially, our resolution is consistent with two empirically well-supported assumptions: (a) people use only a few samples in probabilistic judgments and decision-making, and (b) people tend to overestimate the probability of extreme events in their judgments. Our work casts the St. Petersburg gamble as a particularly risky gamble whose process-level explanation is consistent with a broader process-level model of human decision-making under risk.
The St. Petersburg paradox is a centuries-old puzzle concerning a lottery with infinite expected payoff that people are willing to pay only a small amount to play. Despite many attempts and several proposals, no generally accepted resolution is yet at hand. In a recent paper, we show that this paradox can be understood in terms of the mind optimally using its limited computational resources (Nobandegani et al., 2019). Specifically, we show that the St. Petersburg paradox can be accounted for by a variant of normative expected-utility valuation that acknowledges cognitive limitations: sample-based expected utility (SbEU; Nobandegani et al., 2018). SbEU provides a unified, algorithmic explanation of major experimental findings on this paradox. We conclude by discussing the implications of our work for algorithmically understanding human cognition and for developing human-like artificial intelligence.
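The small-sample assumption behind this account can be illustrated with a toy simulation. The sketch below is not the SbEU model itself (which also involves strategic sample allocation and overweighting of extreme events); it only shows how valuing the St. Petersburg gamble from a handful of mental samples yields a small, finite willingness to pay despite the gamble's infinite expected value. All function names here are hypothetical.

```python
import random

def st_petersburg_payoff(rng):
    """One play of the gamble: flip a fair coin until heads;
    the payoff starts at 2 and doubles with every tail."""
    payoff = 2.0
    while rng.random() < 0.5:
        payoff *= 2.0
    return payoff

def small_sample_value(k, rng):
    """Naive small-sample valuation: the average of k simulated
    payoffs, standing in for a decision-maker who mentally
    samples the gamble only a few times (illustrative only)."""
    return sum(st_petersburg_payoff(rng) for _ in range(k)) / k

rng = random.Random(0)
# Valuations produced by many simulated decision-makers, each
# using only k = 5 samples.
valuations = sorted(small_sample_value(5, rng) for _ in range(10001))
median_valuation = valuations[len(valuations) // 2]
```

Because the enormous payoffs are exponentially rare, a few samples almost never include them, so the typical (median) valuation stays small even though the true expected value diverges.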
Recent experiments reveal that 6- to 12-month-old infants can learn probabilities and reason with them. In this work, we present a novel computational system, the Neural Probability Learner and Sampler (NPLS), that learns and reasons with probabilities, providing a computationally sufficient mechanism to explain infant probabilistic learning and inference. In 24 computer simulations, NPLS shows how probability distributions can emerge naturally from neural-network learning of event sequences. Three mathematical proofs show how and why NPLS simulates the infant results so accurately. The results are situated in relation to seven other active lines of research. This work provides an effective way to integrate Bayesian and neural-network approaches to cognition.
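The core idea that event probabilities can emerge from neural-network learning of event sequences can be sketched with a minimal example. The code below is not the NPLS architecture; it uses the simplest possible learner, a single softmax output layer trained online with the cross-entropy (delta-rule) gradient, and the three-outcome event stream is a hypothetical toy setup. After training, the network's output probabilities approximate the empirical event frequencies.

```python
import math
import random

def softmax(z):
    """Convert logits to a probability distribution."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# Hypothetical toy event stream: 3 outcomes with true
# probabilities 0.6 / 0.3 / 0.1.
rng = random.Random(1)
events = rng.choices([0, 1, 2], weights=[0.6, 0.3, 0.1], k=5000)

logits = [0.0, 0.0, 0.0]
lr = 0.05
for e in events:
    p = softmax(logits)
    # Cross-entropy gradient for a softmax layer: p - one_hot(e).
    for i in range(3):
        logits[i] -= lr * (p[i] - (1.0 if i == e else 0.0))

# The learned distribution tracks the event frequencies.
learned = softmax(logits)
```

The point of the sketch is only that no explicit probability representation is built in: the distribution emerges from incremental error-driven learning over the sequence, which is the qualitative phenomenon the abstract attributes to NPLS.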