Neuroimaging studies of decision-making have generally related neural activity to objective measures (such as reward magnitude, probability or delay), despite choice preferences being subjective. However, economic theories posit that decision-makers behave as though different options have different subjective values. Here we use functional magnetic resonance imaging to show that neural activity in several brain regions (particularly the ventral striatum, medial prefrontal cortex and posterior cingulate cortex) tracks the revealed subjective value of delayed monetary rewards. This correspondence provides unambiguous evidence that the subjective value of potential rewards is explicitly represented in the human brain.
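The abstract does not state the valuation model, but studies of delayed monetary rewards of this kind typically fit a hyperbolic discount function, V = A / (1 + kD), with a per-subject discount rate k. A minimal sketch under that assumption (the function name and parameter values here are illustrative, not from the paper):

```python
def hyperbolic_value(amount, delay, k):
    """Subjective value of a delayed reward under hyperbolic discounting.

    amount : objective reward magnitude
    delay  : delay until delivery (same time units as 1/k)
    k      : discount-rate parameter; larger k means steeper discounting
    """
    return amount / (1.0 + k * delay)

# Illustration: for a subject with k = 0.05 per day, $20 in 30 days
# carries the same subjective value as $8 now.
print(hyperbolic_value(20, 30, 0.05))  # -> 8.0
```

The "revealed subjective value" that neural activity tracks is this fitted quantity rather than the objective amount or delay alone.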
How do humans make choices between different types of rewards? Economists have long argued on theoretical grounds that humans typically make these choices as if the values of the options they consider have been mapped to a single common scale for comparison. Neuroimaging studies in humans have recently begun to suggest the existence of a small group of specific brain sites that appear to encode the subjective values of different types of rewards on a neural common scale, almost exactly as predicted by theory. We have conducted a meta-analysis using data from thirteen different functional magnetic resonance imaging studies published in recent years, and we show that the principal brain area associated with this common representation is a subregion of the ventromedial prefrontal cortex (vmPFC)/orbitofrontal cortex (OFC). The data available today suggest that this common valuation pathway is a core system that participates in day-to-day decision making, suggesting both a neurobiological foundation for standard economic theory and a tool for measuring preferences neurobiologically. Perhaps even more exciting is the possibility that our emerging understanding of the neural mechanisms for valuation and choice may provide fundamental insights into pathological choice behaviors like addiction, obesity and gambling.
A number of recent advances have been achieved in the study of midbrain dopaminergic neurons. Understanding these advances and how they relate to one another requires a deep understanding of the computational models that serve as an explanatory framework and guide ongoing experimental inquiry. This intertwining of theory and experiment now suggests very clearly that the phasic activity of the midbrain dopamine neurons provides a global mechanism for synaptic modification. These synaptic modifications, in turn, provide the mechanistic underpinning for a specific class of reinforcement learning mechanisms that now seem to underlie much of human and animal behavior. This review describes both the critical empirical findings that are at the root of this conclusion and the fantastic theoretical advances from which this conclusion is drawn.
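The class of reinforcement learning mechanisms the review describes is usually formalized as temporal-difference (TD) learning, in which the phasic dopamine signal is modeled as a reward prediction error. A minimal sketch of one TD update, assuming standard textbook notation (the parameter values are illustrative):

```python
def td_update(value, reward, next_value, alpha=0.1, gamma=0.9):
    """One temporal-difference learning step.

    delta, the reward prediction error, is the quantity thought to be
    carried by the phasic activity of midbrain dopamine neurons:
    positive when outcomes are better than predicted, zero when they
    are fully predicted, negative when they are worse.
    """
    delta = reward + gamma * next_value - value  # prediction error
    new_value = value + alpha * delta            # synaptic modification
    return new_value, delta

# An unexpected reward (current value 0) produces a large positive delta;
# a fully predicted reward produces delta near zero.
```

The "global mechanism for synaptic modification" in the abstract corresponds to the `alpha * delta` term: a single broadcast error signal scaling many local weight changes.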
We studied the choice behavior of 2 monkeys in a discrete-trial task with reinforcement contingencies similar to those Herrnstein (1961) used when he described the matching law. In each session, the monkeys experienced blocks of discrete trials at different relative-reinforcer frequencies or magnitudes with unsignalled transitions between the blocks. Steady-state data following adjustment to each transition were well characterized by the generalized matching law; response ratios undermatched reinforcer frequency ratios but matched reinforcer magnitude ratios. We modelled response-by-response behavior with linear models that used past reinforcers as well as past choices to predict the monkeys' choices on each trial. We found that more recently obtained reinforcers more strongly influenced choice behavior. Perhaps surprisingly, we also found that the monkeys' actions were influenced by the pattern of their own past choices. It was necessary to incorporate both past reinforcers and past choices in order to accurately capture steady-state behavior as well as the fluctuations during block transitions and the response-by-response patterns of behavior. Our results suggest that simple reinforcement learning models must account for the effects of past choices to accurately characterize behavior in this task, and that models with these properties provide a conceptual tool for studying how both past reinforcers and past choices are integrated by the neural systems that generate behavior.
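The generalized matching law invoked here relates the response ratio to the reinforcer ratio through a sensitivity exponent and a bias term: log(B1/B2) = s · log(R1/R2) + log(b). Undermatching corresponds to s < 1, exact matching to s = 1. A minimal sketch (parameter values are illustrative, not the paper's fitted estimates):

```python
def generalized_matching(r1, r2, sensitivity=0.8, bias=1.0):
    """Predicted response ratio B1/B2 under the generalized matching law:

        log(B1/B2) = sensitivity * log(r1/r2) + log(bias)

    sensitivity < 1 reproduces the undermatching the monkeys showed for
    reinforcer-frequency ratios; sensitivity = 1 is exact matching, as
    they showed for reinforcer-magnitude ratios.
    """
    return bias * (r1 / r2) ** sensitivity

# A 3:1 reinforcer-frequency ratio with sensitivity 0.8 predicts a
# response ratio of about 2.4 -- i.e., undermatching.
```

The response-by-response linear models described in the abstract go beyond this steady-state relation by weighting individual past reinforcers and past choices, but their block-averaged predictions reduce to this form.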
Risk and ambiguity are two conditions in which the consequences of possible outcomes are not certain. Under risk, the probabilities of different outcomes can be estimated, whereas under ambiguity, even these probabilities are not known. Although most people exhibit at least some aversion to both risk and ambiguity, the degree of these aversions is largely uncorrelated across subjects, suggesting that risk aversion and ambiguity aversion are distinct phenomena. Previous studies have shown differences in brain activations for risky and ambiguous choices and have identified neural mechanisms that may mediate transitions from conditions of ambiguity to conditions of risk. Unknown, however, is whether the value of risky and ambiguous options is necessarily represented by two distinct systems or whether a common mechanism can be identified. To answer this question, we compared the neural representation of subjective value under risk and ambiguity. fMRI was used to track brain activation while subjects made choices regarding options that varied systematically in the amount of money offered and in either the probability of obtaining that amount or the level of ambiguity around that probability. A common system, consisting of at least the striatum and the medial prefrontal cortex, was found to represent subjective value under both conditions.
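The abstract does not give the valuation model, but a common parameterization in this literature treats ambiguity as shrinking the effective winning probability while a power-law utility captures risk attitude. A sketch under that assumption (the functional form, parameter names, and values here are assumptions for illustration, not taken from the paper):

```python
def subjective_value(amount, prob, ambiguity=0.0, alpha=0.8, beta=0.6):
    """Subjective value of a lottery under risk and ambiguity.

    Assumed model: ambiguity (0 = fully known probability, 1 = maximally
    ambiguous) reduces the effective probability by beta * ambiguity / 2,
    and alpha < 1 produces risk aversion via a concave utility function.
    """
    effective_p = prob - beta * ambiguity / 2.0  # ambiguity penalty
    return effective_p * amount ** alpha         # probability-weighted utility

# For an ambiguity-averse subject (beta > 0), a fully ambiguous 50/50
# lottery is worth less than a known 50% lottery over the same amount.
```

Because risk attitude (alpha) and ambiguity attitude (beta) are separate parameters, the model accommodates the finding that the two aversions are largely uncorrelated across subjects, even though a single neural system can represent the resulting subjective values.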