When experts are immersed in a task, do their brains prioritize task-related activity? Most efforts to understand neural activity during well-learned tasks focus on cognitive computations and task-related movements. We wondered whether task-performing animals explore a broader movement landscape, and how this impacts neural activity. We characterized movements using video and other sensors and measured neural activity using widefield and two-photon imaging. Cortex-wide activity was dominated by movements, especially uninstructed movements not required for the task. Some uninstructed movements were aligned to trial events. Accounting for them revealed that neurons with similar trial-averaged activity often reflected utterly different combinations of cognitive and movement variables. Other movements occurred idiosyncratically, accounting for trial-by-trial fluctuations that are often considered "noise". This held true throughout task-learning and for extracellular Neuropixels recordings that included subcortical areas. Our observations argue that animals execute expert decisions while performing richly varied, uninstructed movements that profoundly shape neural activity.
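The partitioning of neural variance between task and movement variables described above can be sketched with a toy linear encoding model. Everything here is a synthetic illustration under assumed names and weights, not the paper's actual video-based pipeline: activity is regressed on a "task" regressor and a "movement" regressor, and the movement term's unique contribution is the gain in explained variance over the task-only model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: neural activity driven partly by a task variable and
# mostly by an "uninstructed movement" variable (weights are illustrative).
T = 2000
task = rng.standard_normal(T)       # e.g. a stimulus/choice regressor
movement = rng.standard_normal(T)   # e.g. video-derived movement energy
activity = 0.3 * task + 1.0 * movement + 0.5 * rng.standard_normal(T)

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def r_squared(X, y, lam=1.0):
    resid = y - X @ ridge_fit(X, y, lam)
    return 1.0 - resid.var() / y.var()

r2_task = r_squared(task[:, None], activity)
r2_move = r_squared(movement[:, None], activity)
r2_full = r_squared(np.column_stack([task, movement]), activity)

# Unique movement contribution: what the full model explains beyond task alone.
unique_move = r2_full - r2_task
print(r2_task, r2_move, r2_full, unique_move)
```

With the assumed weights, the movement regressor accounts for most of the explainable variance, mirroring the abstract's headline result in miniature.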
Simple perceptual tasks have laid the groundwork for understanding the neurobiology of decision-making. Here, we examined this foundation to explain how decision-making circuitry adjusts in the face of a more difficult task. We measured behavioral and physiological responses of monkeys on a two- and four-choice direction-discrimination decision task. For both tasks, firing rates in the lateral intraparietal area appeared to reflect the accumulation of evidence for or against each choice. Evidence accumulation began at a lower firing rate for the four-choice task, but reached a common level by the end of the decision process. The larger excursion suggests that the subjects required more evidence before making a choice. Furthermore, on both tasks, we observed a time-dependent rise in firing rates that may impose a deadline for deciding. These physiological observations constitute an effective strategy for handling increased task difficulty. The differences appear to explain subjects' accuracy and reaction times.
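The qualitative account above, accumulators that start lower for four choices (a larger excursion to a common bound) plus a time-dependent urgency signal, can be simulated with a toy race model. This is a minimal sketch under assumed parameters, not a fit to the recorded LIP data:

```python
import numpy as np

def simulate_race(n_choices, drift=0.8, noise=1.0, bound=2.0,
                  start_offset=0.0, urgency_rate=0.5, dt=0.01,
                  t_max=5.0, n_trials=2000, seed=1):
    """Race of n accumulators toward a shared bound. Accumulator 0 is the
    'correct' option and receives the drift; a common urgency signal rises
    linearly in time and is added to every racer."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)
    x = np.full((n_trials, n_choices), -start_offset)   # lower start = larger excursion
    rt = np.full(n_trials, t_max)
    choice = np.zeros(n_trials, dtype=int)
    done = np.zeros(n_trials, dtype=bool)
    for step in range(1, n_steps + 1):
        t = step * dt
        dx = noise * np.sqrt(dt) * rng.standard_normal((n_trials, n_choices))
        dx[:, 0] += drift * dt
        x[~done] += dx[~done]
        total = x + urgency_rate * t            # shared urgency added to all racers
        winner = total.argmax(axis=1)
        crossed = ~done & (total.max(axis=1) >= bound)
        choice[crossed] = winner[crossed]
        rt[crossed] = t
        done |= crossed
        if done.all():
            break
    return (choice == 0).mean(), rt.mean()

# Two choices with the standard start vs. four choices starting lower:
acc2, rt2 = simulate_race(2, start_offset=0.0)
acc4, rt4 = simulate_race(4, start_offset=0.5)
print(acc2, rt2, acc4, rt4)
```

As in the abstract, the larger excursion for the four-choice condition lengthens mean reaction time, since more evidence must be gathered before any racer reaches the bound.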
Decision making often involves the accumulation of information over time, but acquiring information typically comes at a cost. Little is known about the cost incurred by animals and humans for acquiring additional information from sensory variables, due, for instance, to attentional efforts. Through a novel integration of diffusion models and dynamic programming, we were able to estimate the cost of making additional observations per unit of time from two monkeys and six humans in a reaction-time random-dot motion discrimination task. Surprisingly, we find that the cost is neither zero nor constant over time: for both the monkeys and the humans, it features a brief initial period in which it is constant, increasing thereafter. In addition, we show that our theory accurately matches the observed reaction time distributions for each stimulus condition, the time-dependent choice accuracy both conditional on stimulus strength and independent of it, and choice accuracy and mean reaction times as a function of stimulus strength. The theory also correctly predicts that urgency signals in the brain should be independent of the difficulty, or stimulus strength, on each trial.
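The combination of diffusion models and dynamic programming can be sketched with a toy optimal-stopping computation: given a per-unit-time observation cost, backward induction over a grid of accumulated evidence yields the value of stopping versus continuing, and the stopping bound falls out. All parameters here are illustrative assumptions (a constant cost, two drift hypotheses), much simpler than the paper's actual estimation procedure:

```python
import numpy as np

# Two hypotheses about the drift (+mu or -mu), equal priors. The accumulated
# evidence x is a sufficient statistic; the posterior belief in the positive
# hypothesis is logistic in x.
mu, sigma = 1.0, 1.0
dt, t_max, cost = 0.05, 3.0, 0.1      # cost = assumed price of observing per unit time
xs = np.linspace(-6, 6, 241)
n_steps = int(t_max / dt)

def p_correct(x):
    g = 1.0 / (1.0 + np.exp(-2.0 * mu * x / sigma**2))
    return np.maximum(g, 1.0 - g)     # reward for stopping and picking the likelier option

def kernel(shift):
    """Row i = distribution of next-step evidence given current x = xs[i]."""
    k = np.exp(-((xs[None, :] - xs[:, None] - shift) ** 2) / (2 * sigma**2 * dt))
    return k / k.sum(axis=1, keepdims=True)

K_pos, K_neg = kernel(mu * dt), kernel(-mu * dt)
g = 1.0 / (1.0 + np.exp(-2.0 * mu * xs / sigma**2))

# Backward induction from the horizon, where stopping is forced.
V = p_correct(xs)
bound_trace = []
for _ in range(n_steps):
    V_cont = g * (K_pos @ V) + (1 - g) * (K_neg @ V) - cost * dt
    V_stop = p_correct(xs)
    stop_region = xs[(V_stop >= V_cont) & (xs >= 0)]
    bound_trace.append(stop_region.min())  # smallest positive x where stopping wins
    V = np.maximum(V_stop, V_cont)
bound_trace = np.array(bound_trace[::-1])  # index 0 = start of the trial
print(bound_trace[0], bound_trace[-1])
```

With a constant cost, the bound narrows toward the horizon; a cost that grows over time, as estimated in the paper, would collapse it faster.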
The posterior parietal cortex (PPC) receives diverse inputs and is involved in a dizzying array of behaviors. These multiple behaviors could rely on distinct categories of neurons specialized to represent particular variables or could rely on a single population of PPC neurons that is leveraged in different ways. To distinguish these possibilities, we evaluated rat PPC neurons recorded during multisensory decisions. Novel tests revealed that task parameters and temporal response features were distributed randomly across neurons, without evidence of categories. This suggests that PPC neurons constitute a dynamic network that is decoded according to the animal’s current needs. To test for an additional signature of a dynamic network, we compared moments when behavioral demands differ: decision and movement. Our novel state-space analysis revealed that the network explored different dimensions during decision and movement. These observations suggest that a single network of neurons can support the evolving behavioral demands of decision-making.
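The state-space comparison described above can be sketched with a toy subspace-alignment analysis: find the top principal components of population activity in one epoch, and measure how much of the other epoch's variance they capture. The data, dimensionalities, and alignment measure here are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

rng = np.random.default_rng(2)

n_neurons, n_time = 50, 400
# Synthetic population activity occupying different dimensions in two epochs:
# three latent signals in each epoch live in orthogonal neural subspaces.
basis = np.linalg.qr(rng.standard_normal((n_neurons, 6)))[0]
decision_epoch = (rng.standard_normal((n_time, 3)) @ basis[:, :3].T
                  + 0.1 * rng.standard_normal((n_time, n_neurons)))
movement_epoch = (rng.standard_normal((n_time, 3)) @ basis[:, 3:].T
                  + 0.1 * rng.standard_normal((n_time, n_neurons)))

def top_pcs(X, k):
    """Top-k principal directions (neurons x k, orthonormal)."""
    _, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    return Vt[:k].T

def alignment(X, Q):
    """Fraction of X's variance captured by projecting onto subspace Q."""
    Xc = X - X.mean(axis=0)
    return np.sum((Xc @ Q) ** 2) / np.sum(Xc ** 2)

Q_dec = top_pcs(decision_epoch, 3)
within = alignment(decision_epoch, Q_dec)   # high: its own dominant dimensions
across = alignment(movement_epoch, Q_dec)   # low: movement explores other dimensions
print(within, across)
```

A low cross-epoch alignment, as constructed here, is the signature of a network exploring different dimensions during decision and movement.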
When making a decision, one must first accumulate evidence, often over time, and then select the appropriate action. Here, we present a neural model of decision making that can perform both evidence accumulation and action selection optimally. More specifically, we show that, given a Poisson-like distribution of spike counts, biological neural networks can accumulate evidence without loss of information through linear integration of neural activity, and can select the most likely action through attractor dynamics. This holds for arbitrary correlations, any tuning curves, continuous and discrete variables, and sensory evidence whose reliability varies over time. Our model predicts that the neurons in the lateral intraparietal cortex involved in evidence accumulation encode, on every trial, a probability distribution which predicts the animal’s performance. We present experimental evidence consistent with this prediction, and discuss other predictions applicable to more general settings.
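The claim that Poisson-like populations can accumulate evidence without loss of information through linear integration can be sketched directly: for independent Poisson neurons, the log-likelihood ratio between two stimuli is linear in the spike counts, so a fixed linear readout summed over time bins accumulates the evidence exactly. The tuning curves and parameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# A bank of Poisson neurons with Gaussian tuning curves over preferred stimuli.
prefs = np.linspace(-3, 3, 24)
def rates(s, gain=5.0, width=1.0):
    return gain * np.exp(-0.5 * ((s - prefs) / width) ** 2) + 0.5

s_true, s_alt = 0.5, -0.5
f_true, f_alt = rates(s_true), rates(s_alt)

# For Poisson spiking, the per-bin log-likelihood ratio is
#   LLR = sum_i n_i * log(f_true_i / f_alt_i) - dt * (sum f_true - sum f_alt),
# i.e. a fixed linear function of the counts n_i.
w = np.log(f_true / f_alt)          # fixed linear readout weights
const = f_true.sum() - f_alt.sum()  # spike-independent offset

n_bins, n_trials, dt = 20, 500, 0.05
correct = 0
for _ in range(n_trials):
    llr = 0.0
    for _ in range(n_bins):
        counts = rng.poisson(f_true * dt)   # 50 ms bins under the true stimulus
        llr += counts @ w - const * dt      # linear accumulation of evidence
    correct += llr > 0
print(correct / n_trials)
```

Because the accumulated quantity is the exact log-likelihood ratio, thresholding it implements the optimal decision for this two-alternative case; attractor dynamics would then play the role of the final selection stage.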
Traditionally, insights into neural computation have been furnished by averaged firing rates from many stimulus repetitions or trials. We pursue an analysis of neural response variance to unveil neural computations that cannot be discerned from measures of average firing rate. We analyzed single-neuron recordings from the lateral intraparietal area (LIP), during a perceptual decision-making task. Spike count variance was divided into two components using the law of total variance for doubly stochastic processes: (i) the variance of counts that would be produced by a stochastic point process with a given rate, and (ii), loosely, the variance of the underlying rates that would produce those counts (the variance of the "conditional expectation"). The variance and correlation of the conditional expectation exposed several neural mechanisms: mixtures of firing rate states preceding the decision, accumulation of stochastic "evidence" during decision formation, and a stereotyped response at decision end. These analyses help to differentiate among several alternative decision-making models.
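The decomposition can be illustrated on simulated data. For a doubly stochastic Poisson process, the law of total variance gives Var[N] = E[Var[N | rate]] + Var[E[N | rate]]; since a Poisson count's conditional variance equals its conditional mean (taking the point-process variance scaling to be 1, an assumption here), the variance of the conditional expectation is simply the total count variance minus the mean count. If the rate accumulates a diffusion, this component should grow over the trial:

```python
import numpy as np

rng = np.random.default_rng(4)

# Doubly stochastic Poisson counts: on each trial the rate follows a
# diffusion (accumulation), and spikes are Poisson given that rate.
n_trials, n_bins, dt = 4000, 10, 0.06
base_rate, drift_sd = 20.0, 15.0

rates = base_rate + np.cumsum(
    drift_sd * np.sqrt(dt) * rng.standard_normal((n_trials, n_bins)), axis=1)
rates = np.maximum(rates, 0.1)
counts = rng.poisson(rates * dt)

total_var = counts.var(axis=0)
# Var[N] = E[rate*dt] + Var[rate*dt]  =>  variance of the conditional
# expectation = total variance - mean count (assuming a Poisson point process).
var_ce = total_var - counts.mean(axis=0)
print(var_ce)
```

The recovered component rises roughly linearly across bins, the signature of accumulating stochastic evidence that trial-averaged rates alone would not reveal.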
The study of perceptual decision-making offers insight into how the brain uses complex, sometimes ambiguous information to guide actions. Understanding the underlying processes and their neural bases requires that one pair recordings and manipulations of neural activity with rigorous psychophysics. Though this research has been traditionally performed in primates, it seems increasingly promising to pursue it at least partly in mice and rats. However, rigorous psychophysical methods are not yet as developed for these rodents as they are for primates. Here we give a brief overview of the sensory capabilities of rodents and of their cortical areas devoted to sensation and decision. We then review methods of psychophysics, focusing on the technical issues that arise in their implementation in rodents. These methods represent a rich set of challenges and opportunities.
We report a novel multisensory decision task designed to encourage subjects to combine information across both time and sensory modalities. We presented subjects, humans and rats, with multisensory event streams, consisting of a series of brief auditory and/or visual events. Subjects made judgments about whether the event rate of these streams was high or low. We have three main findings: First, we report that subjects can combine multisensory information over time to improve judgments about whether a fluctuating rate is high or low. Importantly, the improvement we observed was frequently close to, or better than, the statistically optimal prediction. Second, we found that subjects showed a clear multisensory enhancement both when the inputs in each modality were redundant and when they provided independent evidence about the rate. This latter finding suggests a model where event rates are estimated separately for each modality and fused at a later stage. Finally, because a similar multisensory enhancement was observed in both humans and rats, we conclude that the ability to optimally exploit sequentially presented multisensory information is not restricted to a particular species.
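The "statistically optimal prediction" benchmark comes from inverse-variance-weighted cue fusion: if the unisensory rate estimates have noise variances σ_A² and σ_V², the optimal combined estimate has variance (1/σ_A² + 1/σ_V²)⁻¹. A minimal sketch, with illustrative noise levels rather than fitted psychophysical thresholds:

```python
import numpy as np

rng = np.random.default_rng(5)

sigma_a, sigma_v = 2.0, 3.0   # assumed auditory / visual noise SDs
# Optimal fusion prediction for the combined estimate's SD:
sigma_pred = (1 / sigma_a**2 + 1 / sigma_v**2) ** -0.5

# Verify by simulating maximum-likelihood (inverse-variance-weighted) fusion.
n, rate_true = 200_000, 10.0
obs_a = rate_true + sigma_a * rng.standard_normal(n)
obs_v = rate_true + sigma_v * rng.standard_normal(n)
w_a = (1 / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_v**2)
fused = w_a * obs_a + (1 - w_a) * obs_v
print(sigma_pred, fused.std())
```

Comparing subjects' measured multisensory sensitivity against this predicted value is what licenses the "close to, or better than, optimal" conclusion; the fused estimate is always more reliable than either modality alone.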