The speed–accuracy trade-off (SAT) is ubiquitous in decision tasks. While the neural mechanisms underlying decisions are generally well characterized, the application of decision-theoretic methods to the SAT has been difficult to reconcile with experimental data suggesting that decision thresholds are inflexible. Using a network model of a cortical decision circuit, we demonstrate the SAT in a manner consistent with neural and behavioral data and with mathematical models that optimize speed and accuracy with respect to one another. In simulations of a reaction time task, we modulate the gain of the network with a signal encoding the urgency to respond. As the urgency signal builds up, the network progresses through a series of processing stages supporting noise filtering, integration of evidence, amplification of integrated evidence, and choice selection. Analysis of the network's dynamics formally characterizes this progression. Slower buildup of urgency increases accuracy by slowing down the progression. Faster buildup has the opposite effect. Because the network always progresses through the same stages, decision-selective firing rates are stereotyped at decision time.
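The urgency mechanism described in this abstract can be caricatured in a few lines: a gain that grows with elapsed time multiplies both signal and noise, so a fixed threshold is reached sooner when urgency builds faster. This is a minimal sketch, not the network model itself; the parameter names (`urgency_rate`, `g0`, `theta`) and all values are illustrative assumptions.

```python
import math
import random

def urgency_trial(urgency_rate, coherence=0.1, g0=0.2, theta=1.0,
                  sigma=1.0, dt=0.001, t_max=10.0, rng=random):
    """One trial of urgency-gated accumulation (illustrative only).
    The gain g(t) = g0 + urgency_rate * t multiplies both the evidence
    and the noise; the trial ends when the accumulated total |x|
    reaches the fixed threshold theta."""
    x, t = 0.0, 0.0
    while abs(x) < theta and t < t_max:
        gain = g0 + urgency_rate * t
        x += gain * (coherence * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
        t += dt
    return (1 if x > 0 else 0), t  # choice (1 = correct for positive coherence), RT

def mean_rt(urgency_rate, n=300, seed=0):
    """Mean reaction time over n seeded trials."""
    rng = random.Random(seed)
    return sum(urgency_trial(urgency_rate, rng=rng)[1] for _ in range(n)) / n
```

In this toy version, a faster urgency buildup yields shorter mean reaction times and, over many trials, less reliable choices, mirroring the slower/faster progression described above.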
Decisions are faster and less accurate when conditions favor speed, and are slower and more accurate when they favor accuracy. This phenomenon is referred to as the speed-accuracy trade-off (SAT). Behavioral studies of the SAT have a long history, and the data from these studies are well characterized within the framework of bounded integration. According to this framework, decision makers accumulate noisy evidence until the running total for one of the alternatives reaches a bound. Lower and higher bounds favor speed and accuracy respectively, each at the expense of the other. Studies addressing the neural implementation of these computations are a recent development in neuroscience. In this review, we describe the experimental and theoretical evidence provided by these studies. We structure the review according to the framework of bounded integration, describing evidence for (1) the modulation of the encoding of evidence under conditions favoring speed or accuracy, (2) the modulation of the integration of encoded evidence, and (3) the modulation of the amount of integrated evidence sufficient to make a choice. We discuss commonalities and differences between the proposed neural mechanisms, some of their assumptions and simplifications, and open questions for future work. We close by offering a unifying hypothesis on the present state of play in this nascent research field.
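The bounded-integration framework summarized in this abstract is straightforward to simulate. The sketch below is a bare-bones drift-diffusion accumulator with illustrative parameter values: lowering the bound trades accuracy for speed, and raising it does the opposite.

```python
import random

def ddm_trial(drift, bound, sigma=1.0, dt=0.005, rng=random):
    """Accumulate noisy evidence until the running total reaches
    +bound or -bound. With positive drift, hitting +bound is the
    correct choice. All parameter values are illustrative."""
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x > 0 else 0), t  # choice (1 = correct), decision time

def accuracy_and_rt(bound, drift=1.0, n=2000, seed=0):
    """Accuracy and mean decision time over n seeded trials."""
    rng = random.Random(seed)
    trials = [ddm_trial(drift, bound, rng=rng) for _ in range(n)]
    return (sum(c for c, _ in trials) / n,
            sum(t for _, t in trials) / n)
```

In the continuum limit, an unbiased accumulator of this form has expected accuracy 1 / (1 + exp(-2 * bound * drift / sigma**2)), so raising the bound lifts accuracy while lengthening decision times, at the expense of speed.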
Our actions take place in space and time, but despite the role of time in decision theory and the growing acknowledgement that the encoding of time is crucial to behaviour, few studies have considered the interactions between neural codes for objects in space and for elapsed time during perceptual decisions. The speed-accuracy trade-off (SAT) provides a window into spatiotemporal interactions. Our hypothesis is that temporal coding determines the rate at which spatial evidence is integrated, controlling the SAT by gain modulation. Here, we propose that local cortical circuits are inherently suited to the relevant spatial and temporal coding. In simulations of an interval estimation task, we use a generic local-circuit model to encode time by ‘climbing’ activity, seen in cortex during tasks with a timing requirement. The model is a network of simulated pyramidal cells and inhibitory interneurons, connected by conductance synapses. A simple learning rule enables the network to quickly produce new interval estimates, which show signature characteristics of estimates by experimental subjects. Analysis of network dynamics formally characterizes this generic, local-circuit timing mechanism. In simulations of a perceptual decision task, we couple two such networks. Network function is determined only by spatial selectivity and NMDA receptor conductance strength; all other parameters are identical. To trade speed and accuracy, the timing network simply learns longer or shorter intervals, driving the rate of downstream decision processing by spatially non-selective input, an established form of gain modulation. Like the timing network's interval estimates, decision times show signature characteristics of those by experimental subjects. Overall, we propose, demonstrate and analyse a generic mechanism for timing, a generic mechanism for modulation of decision processing by temporal codes, and we make predictions for experimental verification.
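The interval-estimation logic in this abstract, climbing activity that crosses a threshold at the learned time, with a learning rule that rescales the climb, can be caricatured without any spiking network. The sketch below replaces the network with a single deterministic ramp; the multiplicative learning rule and all parameters are illustrative assumptions, not the model's actual rule.

```python
def produce_interval(gain, threshold=1.0, dt=0.001, t_max=10.0):
    """Climbing activity: a rate variable ramps at a speed set by
    'gain'; the produced interval ends when it crosses threshold."""
    x, t = 0.0, 0.0
    while x < threshold and t < t_max:
        x += gain * dt
        t += dt
    return t

def learn_interval(target, gain=0.5, lr=0.8, steps=20):
    """Toy learning rule: multiplicatively nudge the gain so the
    produced interval approaches the target (since the produced
    interval is inversely proportional to the gain)."""
    for _ in range(steps):
        produced = produce_interval(gain)
        gain *= (1 - lr) + lr * produced / target
    return gain
```

Because the update multiplier equals 1 exactly when the produced interval matches the target, the rule converges geometrically to the correct gain, a crude stand-in for the rapid re-learning of new intervals described above.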
Decisions are faster and less accurate when conditions favor speed, and are slower and more accurate when they favor accuracy. This speed-accuracy trade-off (SAT) can be explained by the principles of bounded integration, where noisy evidence is integrated until it reaches a bound. Higher bounds reduce the impact of noise by increasing integration times, supporting higher accuracy (and vice versa for speed). These computations are hypothesized to be implemented by feedback inhibition between neural populations selective for the decision alternatives, each of which corresponds to an attractor in the space of network states. Since decision-correlated neural activity typically reaches a fixed rate at the time of commitment to a choice, it has been hypothesized that the neural implementation of the bound is fixed, and that the SAT is supported by a common input to the populations integrating evidence. According to this hypothesis, a stronger common input reduces the difference between a baseline firing rate and a threshold rate for enacting a choice. In simulations of a two-choice decision task, we use a reduced version of a biophysically based network model (Wong and Wang, 2006) to show that a common input can control the SAT, but that changes to the threshold-baseline difference are epiphenomenal. Rather, the SAT is controlled by changes to network dynamics. A stronger common input decreases the model's effective time constant of integration and changes the shape of the attractor landscape, so the initial state is in a more error-prone position. Thus, a stronger common input reduces decision time and lowers accuracy. The change in dynamics also renders firing rates higher under speed conditions at the time that an ideal observer can make a decision from network activity. The difference between this rate and the baseline rate is actually greater under speed conditions than under accuracy conditions, suggesting that the bound is not implemented by firing rates per se.
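The common-input mechanism can be illustrated with an even more reduced caricature than the Wong and Wang model: two threshold-linear populations with self-excitation and mutual inhibition, sharing a non-selective input. All structure and parameters below are illustrative assumptions, not those of the actual model. The caricature captures the decision-time effect of a stronger common input; the accuracy effect described in the abstract depends on the attractor landscape of the full model and is not reproduced here.

```python
import math
import random

def trial(common_input, coherence=0.2, w_self=1.2, w_inh=0.8, tau=0.1,
          theta=15.0, sigma=0.1, dt=0.001, t_max=3.0, rng=None):
    """One trial of a toy two-population competition model (not the
    Wong-Wang model itself). Each threshold-linear population excites
    itself, inhibits the other, and receives a selective evidence
    input (+/- coherence) plus a non-selective common input."""
    rng = rng or random
    r1 = r2 = 0.0
    t = 0.0
    while t < t_max:
        n1 = sigma * rng.gauss(0.0, 1.0) / math.sqrt(dt)
        n2 = sigma * rng.gauss(0.0, 1.0) / math.sqrt(dt)
        i1 = max(w_self * r1 - w_inh * r2 + common_input + coherence + n1, 0.0)
        i2 = max(w_self * r2 - w_inh * r1 + common_input - coherence + n2, 0.0)
        r1 += dt * (-r1 + i1) / tau
        r2 += dt * (-r2 + i2) / tau
        t += dt
        if r1 >= theta or r2 >= theta:
            break
    return (r1 > r2), t  # correct choice (population 1 wins), decision time

def mean_rt_and_acc(common_input, n=300, seed=0):
    """Mean decision time and accuracy over n seeded trials."""
    rng = random.Random(seed)
    results = [trial(common_input, rng=rng) for _ in range(n)]
    return (sum(t for _, t in results) / n,
            sum(c for c, _ in results) / n)
```

A stronger common input raises both baseline rates toward the fixed threshold, so the winning population crosses it sooner, shortening mean decision times in this toy version.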
We present two weight- and spike-timing-dependent synaptic plasticity rules consistent with the physiological data of Bi and Poo (J Neurosci 18:10464-10472, 1998). One rule assumes synaptic saturation, while the other is scale free. We extend previous analyses of the asymptotic consequences of weight-dependent STDP to the case of strongly correlated pre- and post-synaptic spiking, more closely resembling associative learning. We further provide a general formula for the contribution of any number of spikes to synaptic drift. Asymptotic weights are shown to principally depend on the correlation and rate of pre- and post-synaptic activity, decreasing with increasing rate under correlated activity, and increasing with rate under uncorrelated activity. Spike train statistics reveal a quantitative effect only in the pre-asymptotic regime, and we provide a new interpretation of the relation between BCM and STDP data.
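The distinction between the two rule classes can be sketched as a single pair-based update. The amplitudes and time constants below are illustrative values loosely in the range reported in the STDP literature, not the fitted parameters of either rule: the saturating variant scales potentiation by the remaining headroom and depression by the current weight, while the scale-free variant applies updates independent of the weight.

```python
import math

def stdp_dw(delta_t, w, w_max=1.0, a_plus=0.01, a_minus=0.0105,
            tau_plus=0.017, tau_minus=0.034, weight_dependent=True):
    """Pair-based STDP update for one pre/post spike pair.
    delta_t = t_post - t_pre, in seconds. With weight_dependent=True,
    potentiation scales with the headroom (w_max - w) and depression
    with w, so weights saturate at the bounds; with False, the update
    magnitude is independent of w (scale free / additive)."""
    if delta_t >= 0:  # pre leads post: potentiation
        scale = (w_max - w) if weight_dependent else w_max
        return a_plus * scale * math.exp(-delta_t / tau_plus)
    else:             # post leads pre: depression
        scale = w if weight_dependent else w_max
        return -a_minus * scale * math.exp(delta_t / tau_minus)
```

Under the saturating rule, a synapse near the upper bound potentiates much less than one near the lower bound for the same spike pairing; under the scale-free rule both receive identical updates.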
General anesthetics are routinely used to induce unconsciousness, and much is known about their effects on receptor function and single neuron activity. Much less is known about how these local effects are manifest at the whole-brain level, or about how they influence network dynamics, especially past the point of induced unconsciousness. Using resting-state functional magnetic resonance imaging (fMRI) with nonhuman primates, we investigated the dose-dependent effects of anesthesia on whole-brain temporal modular structure, following loss of consciousness. We found that higher isoflurane dose was associated with an increase in both the number and isolation of whole-brain modules, as well as an increase in the uncoordinated movement of brain regions between those modules. Conversely, we found that higher dose was associated with a decrease in the cohesive movement of brain regions between modules, as well as a decrease in the proportion of modules in which brain regions participated. Moreover, higher dose was associated with a decrease in the overall integrity of networks derived from the temporal modules, with the exception of a single, sensory-motor network. Together, these findings suggest that anesthesia-induced unconsciousness results from the hierarchical fragmentation of dynamic whole-brain network structure, leading to the discoordination of temporal interactions between cortical modules.
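The module-movement statistics this abstract refers to (uncoordinated versus cohesive movement of regions between temporal modules) can be made concrete with a toy computation over window-by-window module assignments. These are simplified versions of flexibility- and cohesion-style statistics from dynamic network analysis, written here for illustration; they are not the study's exact measures.

```python
def flexibility(assignments):
    """assignments[t][i] = module label of region i in time window t.
    Flexibility of a region = fraction of consecutive window pairs in
    which its module label changes."""
    T, n = len(assignments), len(assignments[0])
    return [sum(assignments[t][i] != assignments[t + 1][i]
                for t in range(T - 1)) / (T - 1) for i in range(n)]

def coordinated_moves(assignments):
    """Count, over all region pairs and consecutive windows, moves that
    are cohesive (both regions change module and land in the same new
    module) versus disjoint (only one of the pair changes, or the two
    change to different modules)."""
    T, n = len(assignments), len(assignments[0])
    cohesive = disjoint = 0
    for t in range(T - 1):
        moved = [assignments[t][i] != assignments[t + 1][i] for i in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                if (moved[i] and moved[j]
                        and assignments[t + 1][i] == assignments[t + 1][j]):
                    cohesive += 1
                elif moved[i] or moved[j]:
                    disjoint += 1
    return cohesive, disjoint
```

In these terms, the findings above correspond to disjoint moves increasing, and cohesive moves decreasing, with isoflurane dose.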