Numerous experiments have recently sought to identify neural signals associated with the subjective value (SV) of choice alternatives. Theoretically, SV assessment is an intermediate computational step during decision making, in which alternatives are placed on a common scale to facilitate value-maximizing choice. Here we present a quantitative, coordinate-based meta-analysis of 206 published fMRI studies investigating neural correlates of SV. Our results identify two general patterns of SV-correlated brain responses. In one set of regions, both positive and negative effects of SV on BOLD are reported at above-chance rates across the literature. Areas exhibiting this pattern include anterior insula, dorsomedial prefrontal cortex, dorsal and posterior striatum, and thalamus. The mixture of positive and negative effects potentially reflects an underlying U-shaped function, indicative of signal related to arousal or salience. In a second set of areas, including ventromedial prefrontal cortex and anterior ventral striatum, positive effects predominate. Positive effects in the latter regions are seen both when a decision is confronted and when an outcome is delivered, as well as for both monetary and primary rewards. These regions appear to constitute a "valuation system," carrying a domain-general SV signal and potentially contributing to value-based decision making.
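The distinction between a signed value code and a U-shaped salience code can be illustrated with a small simulation: if a region's BOLD response actually tracks |SV|, then studies that sample mostly appetitive values will report positive linear SV effects while studies sampling mostly aversive values will report negative ones. This is a minimal sketch, not the meta-analytic method used in the article; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_effect(sv, bold):
    """Slope of an ordinary least-squares fit of BOLD on SV."""
    return np.polyfit(sv, bold, 1)[0]

# Hypothetical salience-coding region: BOLD tracks |SV| (U-shaped in SV).
def salience(sv):
    return np.abs(sv) + rng.normal(0, 0.1, sv.shape)

# Studies sampling mostly gains find a positive linear SV effect...
sv_gains = rng.uniform(0, 1, 500)
# ...while studies sampling mostly losses find a negative one.
sv_losses = rng.uniform(-1, 0, 500)

print(linear_effect(sv_gains, salience(sv_gains)))    # > 0
print(linear_effect(sv_losses, salience(sv_losses)))  # < 0
```

Under this toy model, the mixture of positive and negative effects across the literature is exactly what a single underlying U-shaped response would produce.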
Behavioral and economic theories have long maintained that actions are chosen so as to minimize demands for exertion or work, a principle sometimes referred to as the "law of less work." The data supporting this idea pertain almost entirely to demands for physical effort. However, the same minimization principle has often been assumed also to apply to cognitive demand. We set out to evaluate the validity of this assumption. In six behavioral experiments, participants chose freely between courses of action associated with different levels of demand for controlled information processing. Together, the results of these experiments revealed a bias in favor of the less demanding course of action. The bias was obtained across a range of choice settings and demand manipulations, and was not wholly attributable to strategic avoidance of errors, minimization of time on task, or maximization of the rate of goal achievement. Remarkably, the effect also did not depend on awareness of the demand manipulation. Consistent with a motivational account, avoidance of demand displayed sensitivity to task incentives and co-varied with individual differences in the efficacy of executive control. The findings reported, together with convergent neuroscientific evidence, lend support to the idea that anticipated cognitive demand plays a significant role in behavioral decision making.
Maintaining accurate beliefs in a changing environment requires dynamically adapting the rate at which one learns from new experiences. Beliefs should be stable in the face of noisy data, but malleable in periods of change or uncertainty. Here we used computational modeling, psychophysics, and fMRI to show that adaptive learning is not a unitary phenomenon in the brain. Rather, it can be decomposed into three computationally and neuroanatomically distinct factors that were evident in human subjects performing a spatial-prediction task: (1) surprise-driven belief updating, related to BOLD activity in visual cortex; (2) uncertainty-driven belief updating, related to anterior prefrontal and parietal activity; and (3) reward-driven belief updating, a context-inappropriate behavioral tendency related to activity in ventral striatum. These distinct factors converged in a core system governing adaptive learning. This system, which included dorsomedial frontal cortex, responded to all three factors and predicted belief updating both across trials and across individuals.
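The idea of an adaptive learning rate can be sketched as a delta rule whose gain rises with surprise and with uncertainty. The functional form below is a simplified version of the model class commonly used for such change-point tasks; the variable names and the specific combination rule are illustrative, not the article's fitted model.

```python
def adaptive_update(belief, outcome, cpp, relative_uncertainty):
    """One delta-rule update of a belief about a hidden quantity.

    The learning rate rises with surprise (cpp: change-point
    probability, in [0, 1]) and with relative uncertainty about the
    current belief (also in [0, 1]). When a change-point is likely,
    the belief jumps most of the way to the new outcome.
    """
    lr = cpp + (1.0 - cpp) * relative_uncertainty  # effective learning rate
    return belief + lr * (outcome - belief), lr

# High surprise: the belief moves almost all the way to the outcome.
belief, lr = adaptive_update(belief=0.0, outcome=10.0,
                             cpp=0.9, relative_uncertainty=0.2)
# lr = 0.92, belief = 9.2
```

Stable periods (low cpp, low uncertainty) yield a small learning rate, so noisy outcomes barely move the belief, which is exactly the stability/malleability trade-off the abstract describes.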
A great deal of behavioral and economic research suggests that the value attached to a reward stands in inverse relation to the amount of effort required to obtain it, a principle known as effort discounting. In the present article, we present the first direct evidence for a neural analogue of effort discounting. We used fMRI to measure neural responses to monetary rewards in the human nucleus accumbens (NAcc), a structure previously demonstrated to encode reference-dependent reward information. The magnitude of accumbens activation was found to vary with both reward outcome and the degree of mental effort demanded to obtain individual rewards. For a fixed level of reward, the NAcc was less strongly activated following a high demand for effort than following a low demand. The magnitude of this effect was noted to correlate with preceding activation in the dorsal anterior cingulate cortex, a region that has been proposed to monitor information-processing demands and to mediate the subjective experience of effort.
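Effort discounting is often formalized with a hyperbolic discount function, in which subjective value falls as required effort rises. The functional form and the discount parameter below are illustrative; the article measures a neural analogue of discounting rather than fitting this particular equation.

```python
def discounted_value(reward, effort, k=0.5):
    """Hyperbolic effort discounting (illustrative form):
    subjective value = reward / (1 + k * effort), where k > 0
    scales how steeply effort devalues the reward.
    """
    return reward / (1.0 + k * effort)

# Same $10 reward, obtained at low vs. high mental effort:
print(discounted_value(10, 1))  # ≈ 6.67
print(discounted_value(10, 4))  # ≈ 3.33
```

Under this sketch, the weaker NAcc response after high-demand trials corresponds to the lower discounted value of the identical monetary outcome.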
Data analysis workflows in many scientific domains have become increasingly complex and flexible. To assess the impact of this flexibility on functional magnetic resonance imaging (fMRI) results, the same dataset was independently analyzed by 70 teams, testing nine ex-ante hypotheses. The flexibility of analytic approaches is exemplified by the fact that no two teams chose identical workflows to analyze the data. This flexibility resulted in sizeable variation in hypothesis test results, even for teams whose statistical maps were highly correlated at intermediate stages of their analysis pipeline. Variation in reported results was related to several aspects of analysis methodology. Importantly, meta-analytic approaches that aggregated information across teams yielded significant consensus in activated regions across teams. Furthermore, prediction markets of researchers in the field revealed an overestimation of the likelihood of significant findings, even by researchers with direct knowledge of the dataset. Our findings show that analytic flexibility can have substantial effects on scientific conclusions, and demonstrate factors related to variability in fMRI. The results emphasize the importance of validating and sharing complex analysis workflows, and demonstrate the need for multiple analyses of the same data. Potential approaches to mitigate issues related to analytical variability are discussed.
Human choice behavior takes account of internal decision costs: people show a tendency to avoid making decisions in ways that are computationally demanding and subjectively effortful. Here, we investigate neural processes underlying the registration of decision costs. We report two functional MRI experiments that implicate lateral prefrontal cortex (LPFC) in this function. In Experiment 1, LPFC activity correlated positively with a self-report measure of costs as this measure varied over blocks of simple decisions. In Experiment 2, LPFC activity also correlated with individual differences in effort-based choice, taking on higher levels in subjects with a strong tendency to avoid cognitively demanding decisions. These relationships persisted even when effects of reaction time and error were partialled out, linking LPFC activity to subjectively experienced costs and not merely to response accuracy or time on task. In contrast to LPFC, dorsomedial frontal cortex, an area widely implicated in performance monitoring, showed no relationship to decision costs independent of overt performance. Previous work has implicated LPFC in executive control. Our results thus imply that costs may be registered based on the degree to which control mechanisms are recruited during decision-making.

Human choice behavior has been held to be subjectively rational, or, "rational, given the perceptual and evaluational premises of the subject (1)." One key subjective premise, according to influential rational accounts (2-4), is that intensive information processing can carry an internal cost. Accordingly, "better decisions-decisions closer to the optimum, as computed from the point of view of the experimenter/theorist-require increased cognitive and response effort which is disutilitarian (2)." On this view, decision makers balance a motive to maximize gains with a motive to minimize decision costs.
The concept of decision costs helps explain such behavioral phenomena as effort-accuracy tradeoffs (3, 5), reliance on fast and frugal heuristics (6), failure to consider all available alternatives (7), effort discounting (8), the use of stereotypes (9), and salutary effects of monetary incentives (10, 11). Amplified decision costs might play a role in clinical depression (12) and chronic fatigue syndrome (13). This idea is related to the view that decision-making consumes a limited resource (14), and, more generally, that humans act as cognitive misers (15).

The neural mechanisms that underlie the registration of decision costs have never been directly investigated. We hypothesized that costs might be evaluated based on the degree of engagement of brain regions subserving executive control; these specifically include lateral prefrontal cortex (LPFC) and dorsomedial frontal cortex (DMFC) (16-19). Our hypothesis finds support in existing evidence that decision makers prefer to minimize demands for working memory (20), task set configuration (21), and conflict resolution (22-24), all hallmark functions of the executive control system. We focused our ...
Human behavior displays hierarchical structure: Simple actions cohere into subtask sequences, which work together to accomplish overall task goals. Although the neural substrates of such hierarchy have been the target of increasing research, they remain poorly understood. We propose that the computations supporting hierarchical behavior may relate to those in hierarchical reinforcement learning (HRL), a machine learning framework that extends reinforcement learning mechanisms into hierarchical domains. To test this, we leveraged a distinctive prediction arising from HRL. In ordinary reinforcement learning, reward prediction errors are computed when there is an unanticipated change in the prospects for accomplishing overall task goals. HRL entails that prediction errors should also occur in relation to task subgoals. In three neuroimaging studies, we observed neural responses consistent with such subgoal-related reward prediction errors, within structures previously implicated in reinforcement learning. The results reported support the relevance of HRL to the neural processes underlying hierarchical behavior.
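The distinctive HRL prediction can be made concrete with temporal-difference errors computed at two levels: an ordinary reward prediction error over top-level goal prospects, and a subgoal-level error driven by pseudo-reward at subtask completion. This is a minimal sketch of that idea; the variable names, values, and two-level decomposition are illustrative rather than the article's fitted model.

```python
def prediction_errors(reward, pseudo_reward,
                      v_goal_next, v_goal,
                      v_sub_next, v_sub, gamma=0.95):
    """Temporal-difference errors at two levels of a task hierarchy.

    In ordinary RL, the RPE reflects unanticipated changes in prospects
    for the overall goal. HRL additionally predicts a subgoal-level
    error driven by pseudo-reward delivered at subtask completion.
    """
    rpe_goal = reward + gamma * v_goal_next - v_goal
    rpe_sub = pseudo_reward + gamma * v_sub_next - v_sub
    return rpe_goal, rpe_sub

# Completing a subgoal without any change in overall-goal prospects
# yields a subgoal-level error but no top-level error:
top, sub = prediction_errors(reward=0.0, pseudo_reward=1.0,
                             v_goal_next=0.5, v_goal=0.475,
                             v_sub_next=0.0, v_sub=0.0)
# top = 0.0, sub = 1.0
```

It is this dissociation, a prediction error time-locked to subgoal attainment even when overall-goal prospects are unchanged, that the three neuroimaging studies test for.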