To make economic choices between goods, the brain needs to compute representations of their values. A great deal of research has been performed to determine the neural correlates of value representations in the human brain. However, it is still unknown whether there exists a region of the brain that commonly encodes decision values for different types of goods, or if, in contrast, the values of different types of goods are represented in distinct brain regions. We addressed this question by scanning subjects with functional magnetic resonance imaging while they made real purchasing decisions among different categories of goods (food, nonfood consumables, and monetary gambles). We found that activity in a key brain region previously implicated in encoding goal values, the ventromedial prefrontal cortex (vmPFC), was correlated with the subjects' value for each category of good. Moreover, we found a single area in vmPFC to be correlated with the subjects' valuations for all categories of goods. Our results provide evidence that the brain encodes a "common currency" that allows for a shared valuation for different categories of goods.
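The per-category value correlation described above can be pictured as fitting a parametric value regressor for each goods category and asking whether the same voxel carries a positive slope for all of them. The sketch below is purely illustrative (simulated data and a single-regressor fit, not the study's actual fMRI GLM pipeline):

```python
import numpy as np

def value_beta(bold, value):
    """Slope of a voxel's BOLD signal on trial-by-trial subjective
    value -- a stripped-down stand-in for one parametric value
    regressor in a standard fMRI GLM (illustrative only)."""
    X = np.column_stack([value, np.ones_like(value)])
    return np.linalg.lstsq(X, bold, rcond=None)[0][0]

# A "common currency" voxel would show a positive value beta for
# every goods category (a conjunction across category-wise fits).
rng = np.random.default_rng(1)
betas = {}
for category in ("food", "nonfood", "gambles"):
    v = rng.uniform(0, 10, 50)             # simulated subjective values
    bold = 0.3 * v + rng.normal(size=50)   # simulated voxel response
    betas[category] = value_beta(bold, v)
common_currency = all(b > 0 for b in betas.values())
```

Here the conjunction test is just `all(beta > 0)` across categories; real analyses would threshold statistically rather than on sign alone.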
Employers often make payment contingent on performance in order to motivate workers. We used fMRI with a novel incentivized skill task to examine the neural processes underlying behavioral responses to performance-based pay. We found that individuals' performance increased with increasing incentives; however, very high incentive levels led to the paradoxical consequence of worse performance. Between initial incentive presentation and task execution, striatal activity rapidly switched between activation and deactivation in response to increasing incentives. Critically, decrements in performance and striatal deactivations were directly predicted by an independent measure of behavioral loss aversion. These results suggest that incentives associated with successful task performance are initially encoded as a potential gain; however, when actually performing a task, individuals encode the potential loss that would arise from failure.
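The behavioral loss-aversion measure invoked above is conventionally formalized with a prospect-theory value function, in which losses are weighted more heavily than equivalent gains. A minimal sketch (the piecewise power form and the parameter values are textbook conventions, not this study's fitted estimates):

```python
def prospect_value(x, lam=2.0, alpha=0.88):
    """Prospect-theory value function: gains are valued as x**alpha;
    losses are scaled by the loss-aversion coefficient lam
    (lam > 1 means losses loom larger than gains)."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# With lam = 2, a $10 loss weighs twice as much as a $10 gain:
gain = prospect_value(10.0)
loss = prospect_value(-10.0)
ratio = -loss / gain  # equals lam
```

In studies like this one, `lam` is estimated per participant from choices over mixed gambles and then used as the individual-difference predictor.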
The midbrain lies deep within the brain and has an important role in reward, motivation, movement and the pathophysiology of various neuropsychiatric disorders such as Parkinson's disease, schizophrenia, depression and addiction. To date, the primary means of acting on this region has been with pharmacological interventions or implanted electrodes. Here we introduce a new noninvasive brain stimulation technique that exploits the highly interconnected nature of the midbrain and prefrontal cortex to stimulate deep brain regions. Using transcranial direct current stimulation (tDCS) of the prefrontal cortex, we were able to remotely activate the interconnected midbrain and cause increases in participants' appraisals of facial attractiveness. Participants showing greater enhancement of prefrontal/midbrain connectivity following stimulation exhibited greater increases in attractiveness ratings. These results illustrate that noninvasive direct stimulation of prefrontal cortex can induce neural activity in the distally connected midbrain, which directly affects behavior. Furthermore, these results suggest that this tDCS protocol could provide a promising approach to modulate midbrain functions that are disrupted in neuropsychiatric disorders.
To manipulate an object, we must simultaneously control the contact forces exerted on the object and the movements of our hand. Two alternative views for manipulation have been proposed: one in which motions and contact forces are represented and controlled by separate neural processes, and one in which motions and forces are controlled jointly, by a single process. To evaluate these alternatives, we designed three tasks in which subjects maintained a specified contact force while their hand was moved by a robotic manipulandum. The prescribed contact force and hand motions were selected in each task to induce the subject to attain one of three goals: (1) exerting a regulated contact force, (2) tracking the motion of the manipulandum, and (3) attaining both force and motion goals concurrently. By comparing subjects' performances in these three tasks, we found that behavior was captured by the summed actions of two independent control systems: one applying the desired force, and the other guiding the hand along the predicted path of the manipulandum. Furthermore, the application of transcranial magnetic stimulation impulses to the posterior parietal cortex selectively disrupted the control of motion but did not affect the regulation of static contact force. Together, these findings are consistent with the view that manipulation of objects is performed by independent brain control of hand motions and interaction forces.
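The "summed actions of two independent control systems" conclusion can be illustrated with a toy one-dimensional superposition model. The gains and the proportional-control form below are invented for illustration, not the authors' fitted model:

```python
def force_controller(f_desired, f_measured, kf=1.0):
    """Independently regulates contact force toward the desired level."""
    return kf * (f_desired - f_measured)

def motion_controller(x_predicted, x_hand, kp=5.0):
    """Independently guides the hand toward the predicted
    manipulandum path."""
    return kp * (x_predicted - x_hand)

def total_command(f_desired, f_measured, x_predicted, x_hand):
    """Superposition: the two controllers act independently and
    their outputs simply sum, so perturbing one goal (e.g. via TMS
    over parietal cortex disrupting the motion term) leaves the
    other term unchanged."""
    return (force_controller(f_desired, f_measured)
            + motion_controller(x_predicted, x_hand))
```

The key property is additivity: zeroing the motion error leaves the force command intact, mirroring the finding that parietal TMS disrupted motion control but not static force regulation.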
The perceived effort level of an action shapes everyday decisions. Despite the importance of these perceptions for decision-making, the behavioral and neural representations of the subjective cost of effort are not well understood. While a number of studies have implicated anterior cingulate cortex (ACC) in decisions about effort/reward trade-offs, none have experimentally isolated effort valuation from reward and choice difficulty, a function that is commonly ascribed to this region. We used functional magnetic resonance imaging to monitor brain activity while human participants engaged in uncertain choices for prospective physical effort. Our task was designed to examine effort-based decision-making in the absence of reward and separated from choice difficulty—allowing us to investigate the brain’s role in effort valuation, independent of these other factors. Participants exhibited subjectivity in their decision-making, displaying increased sensitivity to changes in subjective effort as objective effort levels increased. Analysis of blood-oxygenation-level dependent activity revealed that the ventromedial prefrontal cortex (vmPFC) encoded the subjective valuation of prospective effort, and ACC activity was best described by choice difficulty. These results provide insight into the processes responsible for decision-making regarding effort, partly dissociating the roles of vmPFC and ACC in prospective valuation of effort and choice difficulty.
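The finding that participants became more sensitive to changes in subjective effort as objective effort rose is consistent with a convex subjective cost function. A toy version (the power form and the exponent are assumptions for illustration, not the paper's fitted model):

```python
def subjective_effort(e, gamma=2.0):
    """Convex mapping from objective effort e (e.g. a fraction of
    maximum force, 0-1) to subjective cost: the marginal cost of
    an extra increment of effort grows as e increases."""
    return e ** gamma

# The same objective increment (0.1) costs more at high effort:
low_increment = subjective_effort(0.3) - subjective_effort(0.2)
high_increment = subjective_effort(0.8) - subjective_effort(0.7)
```

Any `gamma > 1` produces this increasing-sensitivity signature; effort-discounting studies fit such an exponent per participant.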
There is a nuanced interplay between the provision of monetary incentives and behavioral performance. Individuals' performance typically increases with increasing incentives only up to a point, after which larger incentives may result in decreases in performance, a phenomenon known as "choking." We investigated the influence of incentive framing on choking effects in humans: in one condition, participants performed a skilled motor task to obtain potential monetary gains; in another, participants performed the same task to avoid losing a monetary amount. In both the gain and loss frame, the degree of participants' behavioral loss aversion was correlated with their susceptibility to choking effects. However, the effects were markedly different in the gain and loss frames: individuals with higher loss aversion were susceptible to choking for large prospective gains and not susceptible to choking for large prospective losses, whereas individuals with low loss aversion choked for large prospective losses but not for large prospective gains. Activity in the ventral striatum was predictive of performance decrements in both the gain and loss frames. Moreover, a mediation analysis revealed that behavioral loss aversion hindered performance via the influence of ventral striatal activity on motor performance. Our findings indicate that the framing of an incentive has a profound effect on an individual's susceptibility to choking effects, which is contingent on their loss aversion. Furthermore, we demonstrate that the ventral striatum serves as an interface between incentive-driven motivation and instrumental action, regardless of whether incentives are framed in terms of potential losses or gains.
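The mediation analysis mentioned above follows the standard product-of-coefficients logic: the indirect effect of a predictor on an outcome through a mediator is a·b, where a is the predictor-to-mediator slope and b is the mediator-to-outcome slope controlling for the predictor. A stripped-down sketch on synthetic data (the variable names and effect sizes are invented for illustration):

```python
import numpy as np

def mediation_ab(x, m, y):
    """Product-of-coefficients mediation: a = slope of mediator m
    on predictor x; b = slope of outcome y on m, controlling for x.
    Returns the indirect effect a * b."""
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([m, x, np.ones_like(x)])   # y ~ m + x + 1
    b = np.linalg.lstsq(X, y, rcond=None)[0][0]
    return a * b

# Synthetic chain: loss aversion -> striatal signal -> performance
rng = np.random.default_rng(0)
x = rng.normal(size=200)                    # behavioral loss aversion
m = 0.8 * x + 0.1 * rng.normal(size=200)    # striatal response
y = -0.5 * m + 0.1 * rng.normal(size=200)   # performance change
indirect = mediation_ab(x, m, y)            # approx 0.8 * -0.5 = -0.4
```

Published analyses additionally bootstrap a confidence interval around a·b; this sketch only computes the point estimate.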
Chib, Vikram S., James L. Patton, Kevin M. Lynch, and Ferdinando A. Mussa-Ivaldi. Haptic identification of surfaces as fields of force. J Neurophysiol 95: 1068–1077, 2006. First published October 5, 2005; doi:10.1152/jn.00610.2005. The ability to discriminate an object's shape and mechanical properties from touch is one of the most fundamental somatosensory functions. When exploring physical properties of an object, such as stiffness and curvature, humans probe the object's surface and obtain information from the many sensory receptors in their upper limbs. This sensory information is critical for the guidance of actions. We studied how humans acquire an internal representation of the shape and mechanical properties of surfaces and how this information affects the execution of trajectories over the surface. Experiments involved subjects executing trajectories while holding a planar manipulandum that renders planar virtual objects with variable shape and mechanical properties. Subjects were instructed to make reaching movements with the hand between points on the boundary of a curved virtual disk of varying stiffness and curvature. The results suggest two classes of adaptive response: treating the surface as a force perturbation or as an object boundary. In the first case, a rectilinear hand movement is enforced by opposing the interaction forces. In the second case, the trajectory conforms to the object boundary so as to reduce interaction forces. While this dichotomy is evident for very rigid and very soft objects, the likelihood of an object-boundary classification depended, in a smooth and monotonic way, on the average force experienced during the initial movements. Furthermore, the observed responses across a variety of stiffness values led to a constant average interaction force after adaptation. This suggests that the nervous system may select from the two responses through a mechanism that attempts to establish a constant interaction force.
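The constant-interaction-force result has a simple geometric consequence if the surface is approximated as a linear spring (F = k·d): after adaptation, penetration depth into the surface should scale inversely with stiffness. A toy illustration under that linear-stiffness assumption (the numbers are arbitrary):

```python
def penetration_depth(f_settled, k):
    """Steady-state penetration into a linear spring-like surface
    of stiffness k, assuming adaptation converges on a constant
    interaction force f_settled (so d = F / k)."""
    return f_settled / k

# Stiffer surfaces are penetrated less for the same settled force,
# so trajectories conform more closely to a rigid boundary:
depths = {k: penetration_depth(4.0, k) for k in (100.0, 400.0, 1600.0)}
```

This captures why the rigid and compliant extremes look like two distinct strategies (boundary-following versus straight-line pushing) even if a single constant-force mechanism generates both.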
The objective of this technical advance is to permit in situ visualization of ultrasonographic images so that direct hand-eye coordination can be used during invasive procedures. A method is presented that merges the visual outer surface of a patient with a simultaneous ultrasonographic scan of the patient's interior. The method combines a flat-panel monitor with a half-silvered mirror such that the image on the monitor is reflected precisely at the proper location within the patient. The ultrasonographic image is superimposed in real time on the patient, merging with the operator's hands and any invasive tools in the field of view. Instead of looking away from the patient at an ultrasonographic monitor, the operator sees through skin and underlying tissue as if it were translucent. Two working prototypes have been constructed, demonstrating independence of viewer location and requiring no special apparatus to be worn by the operator. This method could enable needles and scalpels to be manipulated with direct hand-eye coordination under ultrasonographic guidance. Invasive tools would be visible up to where they enter the skin, permitting natural visual extrapolation into the ultrasonographic slice. Biopsy needles would no longer be restricted to lie in the plane of the ultrasonographic scan but could instead intersect it. These advances could lead to increased safety, ease, and reliability in certain invasive procedures.