The subjective well-being or happiness of individuals is an important metric for societies. Although happiness is influenced by life circumstances and population demographics such as wealth, we know little about how the cumulative influence of daily life events is aggregated into subjective feelings. Using computational modeling, we show that emotional reactivity in the form of momentary happiness in response to outcomes of a probabilistic reward task is explained not by current task earnings but by the combined influence of recent reward expectations and prediction errors arising from those expectations. The robustness of this account was evident in a large-scale replication involving 18,420 participants. Using functional MRI, we show that the very same influences account for task-dependent striatal activity in a manner akin to the influences underpinning changes in happiness.

reward prediction error | dopamine | striatum | insula

Philosophers from Aristotle to Bentham have argued for the central importance of subjective well-being in human conscious experience. Bentham suggested that "it is the greatest happiness of the greatest number that is the measure of right and wrong" (1). This dictum informs the policies of many nations, which deploy population measures of well-being in pursuit of this goal (2). However, happiness is a difficult concept to define (3-5), and the complexity of the relationship between happiness and wealth (6-8) suggests that there is no simple happiness-reward relationship. Here, we provide an analysis of one of the foundations on which happiness is assumed to be built, namely the subjective response to rewards. We focus on rewards that are external quantifiable objects (e.g., money) that might elicit affective and motivational responses (9).

To address the relationship between reward and ongoing happiness, it is essential to be able to measure happiness reliably and to influence it on an appropriate time scale.
Experience sampling is an established methodology that measures phenomenological states as subjects engage in daily life. By repeatedly asking participants to report on their subjective emotional state, these feelings can be related to antecedent life events, including rewards (10-13). Momentary measures of happiness or hedonic well-being reveal emotional reactivity to recent events and thus differ from overall life satisfaction, although it is possible that life satisfaction relates to the temporal integral of momentary happiness over a longer time scale (12).

Here we asked subjects to perform a probabilistic reward task in which they chose between certain and risky monetary options while being asked after every few trials to report, "How happy are you right now?" We expected this task to elicit rapid changes in affective state, and we therefore used a more frequent variant of experience sampling adapted to laboratory and functional MRI (fMRI) settings. Importantly, the experience sampling question makes no reference to past events and concerns the overall subjective emotional state.
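The kind of model described above, in which momentary happiness reflects a decaying weighted history of recent expectations and prediction errors rather than cumulative earnings, can be sketched as follows. The weights and decay rate here are illustrative placeholders; in the published work they are fitted per subject to the happiness ratings.

```python
import numpy as np

def momentary_happiness(cr, ev, rpe, w0=0.0, w1=0.5, w2=0.3, w3=0.6, gamma=0.8):
    """Momentary happiness after trial t as an exponentially decaying weighted
    sum of certain rewards (cr), chosen expected values (ev), and reward
    prediction errors (rpe) from trials 1..t.  Parameter values illustrative."""
    t = len(rpe)
    decay = gamma ** np.arange(t - 1, -1, -1)  # most recent trial weighs most
    return (w0
            + w1 * float(np.dot(decay, cr))
            + w2 * float(np.dot(decay, ev))
            + w3 * float(np.dot(decay, rpe)))
```

Note that current task earnings (the running sum of `cr` and realized outcomes) never enter the expression directly; only recent expectations and surprises do.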
Experiences affect mood, which in turn affects subsequent experiences. Recent studies suggest two specific principles. First, mood depends on how recent reward outcomes differ from expectations. Second, mood biases the way we perceive outcomes (e.g., rewards), and this bias affects learning about those outcomes. We propose that this two-way interaction serves to mitigate inefficiencies in the application of reinforcement learning to real-world problems. Specifically, we propose that mood represents the overall momentum of recent outcomes, and its biasing influence on the perception of outcomes ‘corrects’ learning to account for environmental dependencies. We describe potential dysfunctions of this adaptive mechanism that might contribute to the symptoms of mood disorders.
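The two-way interaction proposed above can be made concrete in a minimal sketch, assuming a Rescorla-Wagner learner: mood is a running average (momentum) of recent prediction errors, and in turn biases how the next outcome is perceived. The function name, parameters, and linear bias term are illustrative, not the authors' exact formulation.

```python
def mood_biased_update(value, reward, mood, alpha=0.1, eta=0.2, bias=0.5):
    """One trial of mood-biased reinforcement learning (illustrative sketch).
    Mood tracks the momentum of recent prediction errors and biases the
    perception of the next outcome."""
    perceived = reward + bias * mood        # good mood inflates perceived reward
    pe = perceived - value                  # prediction error on perceived outcome
    value = value + alpha * pe              # standard Rescorla-Wagner update
    mood = mood + eta * (pe - mood)         # mood drifts toward recent errors
    return value, mood
```

A streak of better-than-expected outcomes raises mood, which inflates the perceived value of subsequent outcomes; the proposal above is that this feedback loop helps learning track environments where good outcomes cluster together.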
The neuromodulator dopamine has a well-established role in reporting appetitive prediction errors that are widely considered in terms of learning. However, across a wide variety of contexts, both phasic and tonic aspects of dopamine are likely to exert more immediate effects that have been less well characterized. Of particular interest is dopamine's influence on economic risk taking and on subjective well-being, a quantity known to be substantially affected by prediction errors resulting from the outcomes of risky choices. By boosting dopamine levels using levodopa (L-DOPA) as human subjects made economic decisions and repeatedly reported their momentary happiness, we show here an effect on both choices and happiness. Boosting dopamine levels increased the number of risky options chosen in trials involving potential gains but not trials involving potential losses. This effect could be better captured as increased Pavlovian approach in an approach-avoidance decision model than as a change in risk preferences within an established prospect theory model. Boosting dopamine also increased happiness resulting from some rewards. Our findings thus identify specific novel influences of dopamine on decision making and emotion that are distinct from its established role in learning.
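The model comparison above can be sketched as follows, under loose assumptions: prospect-theory utilities for a 50/50 gamble, plus a Pavlovian approach bonus added to the gamble only when a potential gain is on offer. The L-DOPA effect is then modeled as raising the approach parameter `theta` rather than changing risk preferences (`rho`, `lam`). All names and values are illustrative, not the paper's fitted model.

```python
import math

def p_gamble(certain, gain, loss, rho=1.0, lam=1.5, mu=3.0, theta=0.2):
    """Probability of choosing a 50/50 gamble over a certain amount.
    Prospect-theory utility plus a Pavlovian approach bonus (theta) that
    applies only when the gamble offers a potential gain."""
    def u(x):  # prospect-theory value function with loss aversion lam
        return x ** rho if x >= 0 else -lam * ((-x) ** rho)
    u_gamble = 0.5 * u(gain) + 0.5 * u(loss)
    bonus = theta if gain > 0 else 0.0
    return 1.0 / (1.0 + math.exp(-mu * (u_gamble + bonus - u(certain))))
```

Because the bonus is gated on the presence of a potential gain, raising `theta` increases risk taking in gain trials while leaving loss trials untouched, matching the asymmetry reported above.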
Making appropriate choices often requires the ability to learn the value of available options from experience. Parkinson’s disease is characterized by a loss of dopamine neurons in the substantia nigra, neurons hypothesized to play a role in reinforcement learning. Although previous studies have shown that Parkinson’s patients are impaired in tasks involving learning from feedback, they have not directly tested the widely held hypothesis that dopamine neuron activity specifically encodes the reward prediction error signal used in reinforcement learning models. To test a key prediction of this hypothesis, we fit choice behavior from a dynamic foraging task with reinforcement learning models and show that treatment with dopaminergic drugs alters choice behavior in a manner consistent with the theory. More specifically, we found that dopaminergic drugs selectively modulate learning from positive outcomes. We observed no effect of dopaminergic drugs on learning from negative outcomes. We also found a novel dopamine-dependent effect on decision making that is not accounted for by reinforcement learning models: perseveration in choice, independent of reward history, increases with Parkinson’s disease and decreases with dopamine therapy.
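The two effects reported above, asymmetric learning from positive versus negative outcomes and reward-independent perseveration, can be sketched in a few lines. The parameter values are illustrative; in the study they are fitted to each patient's choices on and off medication.

```python
import math

def update_value(q, reward, alpha_pos=0.3, alpha_neg=0.1):
    """Q-learning with separate learning rates: dopaminergic medication is
    modeled as raising alpha_pos (learning from positive outcomes) only."""
    pe = reward - q
    return q + (alpha_pos if pe > 0 else alpha_neg) * pe

def p_choose_a(q_a, q_b, chose_a_last, beta=3.0, kappa=0.5):
    """Softmax choice with a perseveration bonus (kappa) added to the
    previously chosen option, independent of reward history."""
    u_a = beta * q_a + (kappa if chose_a_last else 0.0)
    u_b = beta * q_b + (0.0 if chose_a_last else kappa)
    return 1.0 / (1.0 + math.exp(u_b - u_a))
```

With `kappa > 0` the model repeats its previous choice even when the two values are identical, which is the signature of the dopamine-dependent perseveration effect that plain reinforcement learning models cannot capture.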
The human settlement of the Pacific Islands represents one of the most recent major migration events of mankind. Polynesians originated in Asia according to linguistic evidence or in Melanesia according to archaeological evidence. To shed light on the genetic origins of Polynesians, we investigated over 400 Polynesians from 8 island groups, in comparison with over 900 individuals from potential parental populations of Melanesia, Southeast and East Asia, and Australia, by means of Y chromosome (NRY) and mitochondrial DNA (mtDNA) markers. Overall, we classified 94.1% of Polynesian Y chromosomes and 99.8% of Polynesian mtDNAs as of either Melanesian (NRY-DNA: 65.8%, mtDNA: 6%) or Asian (NRY-DNA: 28.3%, mtDNA: 93.8%) origin, suggesting a dual genetic origin of Polynesians in agreement with the "Slow Boat" hypothesis. Our data suggest a pronounced admixture bias in Polynesians toward more Melanesian men than women, perhaps as a result of matrilocal residence in the ancestral Polynesian society. Although dating methods are consistent with somewhat similar entries of NRY/mtDNA haplogroups into Polynesia, haplotype sharing suggests an earlier appearance of Melanesian haplogroups than those from Asia. Surprisingly, we identified gradients in the frequency distribution of some NRY/mtDNA haplogroups across Polynesia and a gradual west-to-east decrease of overall NRY/mtDNA diversity, not only providing evidence for a west-to-east direction of Polynesian settlements but also suggesting that Pacific voyaging was regular rather than haphazard. We also demonstrate that Fiji played a pivotal role in the history of Polynesia: humans probably first migrated to Fiji, and subsequent settlement of Polynesia probably came from Fiji.
The effects of stress are frequently studied, yet its proximal causes remain unclear. Here we demonstrate that subjective estimates of uncertainty predict the dynamics of subjective and physiological stress responses. Subjects learned a probabilistic mapping between visual stimuli and electric shocks. Salivary cortisol confirmed that our stressor elicited changes in endocrine activity. Using a hierarchical Bayesian learning model, we quantified the relationship between the different forms of subjective task uncertainty and acute stress responses. Subjective stress, pupil diameter and skin conductance all tracked the evolution of irreducible uncertainty. We observed a coupling between emotional and somatic state, with subjective and physiological tuning to uncertainty tightly correlated. Furthermore, the uncertainty tuning of subjective and physiological stress predicted individual task performance, consistent with an adaptive role for stress in learning under uncertain threat. Our finding that stress responses are tuned to environmental uncertainty provides new insight into their generation and likely adaptive function.
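The distinction between irreducible uncertainty (outcome variance that remains even when the shock probability is known) and estimation uncertainty (ignorance about the probability itself) can be illustrated with a simple Beta-Bernoulli learner. This is a deliberate simplification of the hierarchical Bayesian model used in the study.

```python
def bernoulli_uncertainties(n_shock, n_safe):
    """Track a stimulus-shock contingency with a Beta(1, 1) prior.
    Returns (irreducible, estimation) uncertainty:
      - irreducible: outcome variance remaining even if p were known exactly
      - estimation:  posterior variance reflecting ignorance about p itself"""
    a, b = 1 + n_shock, 1 + n_safe
    p_hat = a / (a + b)                                  # posterior mean of p
    irreducible = p_hat * (1 - p_hat)                    # Bernoulli variance at p_hat
    estimation = (a * b) / ((a + b) ** 2 * (a + b + 1))  # variance of Beta(a, b)
    return irreducible, estimation
```

With a 50% shock contingency, more observations shrink estimation uncertainty toward zero while irreducible uncertainty stays at its maximum; the finding above is that stress responses track the latter quantity.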
Making predictions about the rewards associated with environmental stimuli and updating those predictions through feedback is an essential aspect of adaptive behavior. Theorists have argued that dopamine encodes a reward prediction error (RPE) signal that is used in such a reinforcement learning process. Recent work with fMRI has demonstrated that the BOLD signal in dopaminergic target areas meets both necessary and sufficient conditions of an axiomatic model of the RPE hypothesis. However, there has been no direct evidence that dopamine release itself also meets necessary and sufficient criteria for encoding an RPE signal. Further, the fact that dopamine neurons have low tonic firing rates that yield a limited dynamic range for encoding negative RPEs has led to significant debate about whether positive and negative prediction errors are encoded on a similar scale. To address both of these issues, we used fast-scan cyclic voltammetry to measure reward-evoked dopamine release at carbon fiber electrodes chronically implanted in the nucleus accumbens core of rats trained on a probabilistic decision-making task. We demonstrate that dopamine concentrations transmit a bidirectional RPE signal with symmetrical encoding of positive and negative RPEs. Our findings strengthen the case that changes in dopamine concentration alone are sufficient to encode the full range of RPEs necessary for reinforcement learning.
Neuroimaging studies typically identify neural activity correlated with the predictions of highly parameterized models, like the many reward prediction error (RPE) models used to study reinforcement learning. Identified brain areas might encode RPEs or, alternatively, only have activity correlated with RPE model predictions. Here, we use an alternate axiomatic approach rooted in economic theory to formally test the entire class of RPE models on neural data. We show that measurements of human neural activity from the striatum, medial prefrontal cortex, amygdala, and posterior cingulate cortex satisfy necessary and sufficient conditions for the entire class of RPE models. However, activity measured from the anterior insula falsifies the axiomatic model, and therefore no RPE model can account for measured activity. Further analysis suggests the anterior insula might instead encode something related to the salience of an outcome. As cognitive neuroscience matures and models proliferate, formal approaches of this kind that assess entire model classes rather than specific model exemplars may take on increased significance.
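A sketch of what testing an entire model class amounts to, in the spirit of the axiomatic (Caplin-Dean) framework the paragraph alludes to: rather than fitting one RPE model, one checks ordering conditions that every RPE model must satisfy. The deterministic comparisons below are a simplification; real neural data require statistical tests of each axiom.

```python
from itertools import combinations

def satisfies_rpe_axioms(activity, degenerate=(), tol=1e-9):
    """activity: dict mapping (lottery, outcome) -> mean measured signal.
    degenerate: (lottery, outcome) pairs where the outcome was fully predicted.
    Returns True iff the data are consistent with *some* RPE model."""
    lotteries = sorted({l for l, _ in activity})
    outcomes = sorted({o for _, o in activity})

    def sign(x):
        return 0 if abs(x) < tol else (1 if x > 0 else -1)

    # Axiom 1 (consistent prize ordering): the ranking of outcomes by
    # activity must agree across all lotteries in which both occur.
    for o1, o2 in combinations(outcomes, 2):
        signs = {sign(activity[(l, o1)] - activity[(l, o2)])
                 for l in lotteries
                 if (l, o1) in activity and (l, o2) in activity}
        if len(signs - {0}) > 1:
            return False

    # Axiom 2 (consistent lottery ordering): the ranking of lotteries by
    # activity must agree across all outcomes realized under both.
    for l1, l2 in combinations(lotteries, 2):
        signs = {sign(activity[(l1, o)] - activity[(l2, o)])
                 for o in outcomes
                 if (l1, o) in activity and (l2, o) in activity}
        if len(signs - {0}) > 1:
            return False

    # Axiom 3 (no surprise): fully predicted outcomes evoke equal activity.
    vals = [activity[p] for p in degenerate if p in activity]
    return all(abs(v - vals[0]) < tol for v in vals) if vals else True
```

A salience signal of the kind suggested for the anterior insula (large responses to both very good and very bad outcomes) violates the consistent-ordering axioms, which is why no RPE model, whatever its parameters, can account for such activity.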