The prefrontal cortex is centrally involved in a wide range of cognitive functions and their impairment in psychiatric disorders. Yet, the computational principles that govern the dynamics of prefrontal neural networks, and link their physiological, biochemical and anatomical properties to cognitive functions, are not well understood. Computational models can help to bridge the gap between these different levels of description, provided they are sufficiently constrained by experimental data and capable of predicting key properties of the intact cortex. Here, we present a detailed network model of the prefrontal cortex, based on a simple computationally efficient single neuron model (simpAdEx), with all parameters derived from in vitro electrophysiological and anatomical data. Without additional tuning, the model quantitatively reproduced a wide range of measures from in vivo electrophysiological recordings, to a degree where simulated and experimentally observed activities were statistically indistinguishable. These measures include spike train statistics, membrane potential fluctuations, local field potentials, and the transmission of transient stimulus information across layers. We further demonstrate that model predictions are robust against moderate changes in key parameters, and that synaptic heterogeneity is a crucial ingredient to the quantitative reproduction of in vivo-like electrophysiological behavior. Thus, we have produced a PFC network model that is physiologically valid in a quantitative sense, yet computationally efficient; it helped to identify key properties underlying spike-time dynamics as observed in vivo, and can be harnessed for in-depth investigation of the links between physiology and cognition.
For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f–I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron’s response under a wide range of mean-input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f–I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating (“in vivo-like”) input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model’s generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a “high-throughput” model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available.
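The core idea — fitting a closed-form firing-rate expression directly to f–I data, with no numerical integration of the model ODEs — can be illustrated with a toy version of the procedure. The abstract does not give the simpAdEx rate formula, so the sketch below substitutes the standard closed-form f–I curve of a leaky integrate-and-fire neuron; all parameter names and values (`tau_m`, `R`, `V_th`, `V_r`, the current range) are illustrative assumptions, not the paper's:

```python
import numpy as np
from scipy.optimize import curve_fit

def lif_fi(I, tau_m, R, V_th=15.0, V_r=0.0):
    """Closed-form f-I curve of a leaky integrate-and-fire neuron.

    I in pA, R in GOhm (so R*I is in mV), tau_m in s; returns rate in Hz.
    Rate is zero below rheobase (R*I <= V_th).
    """
    drive = R * I
    with np.errstate(divide="ignore", invalid="ignore"):
        rate = 1.0 / (tau_m * np.log((drive - V_r) / (drive - V_th)))
    return np.where(drive > V_th, rate, 0.0)

# Synthetic "recorded" f-I data from a ground-truth cell
# (in practice: rates from standard current-step protocols).
I_steps = np.linspace(350, 1000, 14)            # injected currents (pA)
f_obs = lif_fi(I_steps, tau_m=0.020, R=0.05)    # observed rates (Hz)

# Fit tau_m and R directly to the closed-form expression: one cheap
# least-squares problem instead of repeated ODE simulations.
popt, _ = curve_fit(lambda I, tau_m, R: lif_fi(I, tau_m, R),
                    I_steps, f_obs, p0=[0.010, 0.08])
tau_fit, R_fit = popt
```

Because each evaluation of the objective is a vectorized formula rather than a simulated voltage trace, this style of fitting is what makes the two-orders-of-magnitude speedup plausible.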
Humans can estimate the duration of intervals of time, and psychophysical experiments show that these estimations are subject to timing errors. According to standard theories of timing, these errors increase linearly with the interval to be estimated (Weber's law), although deviations from linearity are reported at both longer and shorter intervals. This is not easily reconciled with the accumulation of neuronal noise, which would only lead to an increase with the square root of the interval. Here, we offer a neuronal model which explains the form of the error function as a result of a constrained optimization process. The model consists of a number of synfire chains with different transmission times, which project onto a set of readout neurons. We show that an increase in the transmission time corresponds to a superlinear increase of the timing errors. Under the assumption of a fixed chain length, the experimentally observed error function emerges from optimal selection of chains for each given interval. Furthermore, we show how this optimal selection could be implemented by competitive spike-timing dependent plasticity in the connections from the chains to the readout network, and discuss implications of our model for selective temporal learning and possible neural architectures of interval timing.
Mathematical modeling is a useful tool for understanding the neurodynamical and computational mechanisms of cognitive abilities like time perception, and for linking neurophysiology to psychology. In this chapter, we discuss several biophysical models of time perception and how they can be tested against experimental evidence. After a brief overview on the history of computational timing models, we list a number of central psychological and physiological findings that such a model should be able to account for, with a focus on the scaling of the variability of duration estimates with the length of the interval that needs to be estimated. The functional form of this scaling turns out to be predictive of the underlying computational mechanism for time perception. We then present four basic classes of timing models (ramping activity, sequential activation of neuron populations, state space trajectories and neural oscillators) and discuss two specific examples in more detail. Finally, we review to what extent existing theories of time perception adhere to the experimental constraints.
The ability to tell time is a crucial requirement for almost everything we do, but the neural mechanisms of time perception are still largely unknown. One way to approach these mechanisms is through computational modelling. This review provides an overview of the most prominent timing models, experimental evidence in their support, and formal ways for understanding the relationship between mechanisms of time perception and the scaling behavior of time estimation errors. Theories that interpret timing as a byproduct of other computational processes are also discussed. We suggest that there may be in fact a multitude of timing mechanisms in operation, anchored within area-specific computations, and tailored to different sensory-behavioral requirements. These ultimately have to be integrated into a common frame (a "temporal hub") for the purpose of decision making. This common frame may support Bayesian integration and generalization across sensory modalities.

Classes of time perception models

The models of time perception which are most frequently discussed in the current literature can be classified into four principal neural mechanisms [1] (Figure 1).
A prominent finding in psychophysical experiments on time perception is Weber's law, the linear scaling of timing errors with duration. The ability to reproduce this scaling has been taken as a criterion for the validity of neurocomputational models of time perception. However, the origin of Weber's law remains unknown, and currently only a few models generically reproduce it. Here, we use an information-theoretical framework that considers the neuronal mechanisms of time perception as stochastic processes to investigate the statistical origin of Weber's law in time perception and also its frequently observed deviations. Under the assumption that the brain is able to compute optimal estimates of time, we find that Weber's law only holds exactly if the estimate is based on temporal changes in the variance of the process. In contrast, the timing errors scale sublinearly with time if the systematic changes in the mean of a process are used for estimation, as is the case in the majority of time perception models, while estimates based on temporal correlations result in a superlinear scaling. This hierarchy of temporal information is preserved if several sources of temporal information are available. Furthermore, we consider the case of multiple stochastic processes and study the examples of a covariance-based model and a model based on synfire chains. This approach reveals that existing neurocomputational models of time perception can be classified as mean-, variance- and correlation-based processes and allows predictions about the scaling of the resulting timing errors.
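The hierarchy described above — sublinear (square-root) error scaling for mean-based estimates versus linear Weber scaling for variance-based estimates — can be checked numerically with a toy drift-diffusion readout. The sketch below is a minimal illustration under assumed parameters (`mu`, `sigma`, the number of processes `k`), not a reproduction of the paper's framework:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n_trials, k = 1.0, 0.5, 100_000, 50

def sd_mean_based(t):
    # Estimate t from the mean drift of a single diffusion X_t ~ N(mu*t, sigma^2*t):
    # t_hat = X_t / mu, so sd(t_hat) = sigma*sqrt(t)/mu -> sublinear in t.
    x = rng.normal(mu * t, sigma * np.sqrt(t), size=n_trials)
    return np.std(x / mu)

def sd_var_based(t):
    # Estimate t from the sample variance of k driftless diffusions:
    # t_hat = mean(X_i^2)/sigma^2, so sd(t_hat) = t*sqrt(2/k) -> linear (Weber).
    x = rng.normal(0.0, sigma * np.sqrt(t), size=(n_trials, k))
    t_hat = (x ** 2).mean(axis=1) / sigma ** 2
    return np.std(t_hat)

ts = np.array([1.0, 4.0, 16.0])
mean_sds = np.array([sd_mean_based(t) for t in ts])
var_sds = np.array([sd_var_based(t) for t in ts])
```

Quadrupling the interval roughly doubles the mean-based error but quadruples the variance-based error, matching the sublinear-versus-linear distinction drawn in the abstract.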
This paper reports how and to what extent the mass distribution of a passive dynamic walker can be tuned to maximize walking speed and stability. An exploration of the complete parameter space of a bipedal walker is performed by numerical optimization, and optimal manifolds are found in terms of speed, the form of which can be explained by a physical analysis of step periods. Stability, quantified by the minimal basin of attraction, is also shown to be high along these manifolds, but with a maximum at only moderate speeds. Furthermore, it is examined how speed and stability change on different ground slopes. The observed dependence of the stability measure on the slope is consistent with the interpretation of the walking cycle as a feedback loop, which also provides an explanation for the destabilization of the gait at higher slopes. Regarding speed, an unexpected decrease at higher slopes is observed. This effect reveals another important feature of passive dynamic walking, a swing-back phase of the swing leg near the end of a step, which decreases walking speed on the one hand, but seems to be crucial for the stability of the gait on the other hand. In conclusion, maximal robustness and highest walking speed are shown to be partly conflicting objectives of optimization.
Oscillations are ubiquitous features of brain dynamics that undergo task-related changes in synchrony, power, and frequency. The impact of those changes on target networks is poorly understood. In this work, we used a biophysically detailed model of prefrontal cortex (PFC) to explore the effects of varying the spike rate, synchrony, and waveform of strong oscillatory inputs on the behavior of cortical networks driven by them. Interacting populations of excitatory and inhibitory neurons with strong feedback inhibition are inhibition-based network oscillators that exhibit resonance (i.e., larger responses to preferred input frequencies). We quantified network responses in terms of mean firing rates and the population frequency of network oscillation; and characterized their behavior in terms of the natural response to asynchronous input and the resonant response to oscillatory inputs. We show that strong feedback inhibition causes the PFC to generate internal (natural) oscillations in the beta/gamma frequency range (>15 Hz) and to maximize principal cell spiking in response to external oscillations at slightly higher frequencies. Importantly, we found that the fastest oscillation frequency that can be relayed by the network maximizes local inhibition and is equal to a frequency even higher than that which maximizes the firing rate of excitatory cells; we call this phenomenon population frequency resonance. This form of resonance is shown to determine the optimal driving frequency for suppressing responses to asynchronous activity. Lastly, we demonstrate that the natural and resonant frequencies can be tuned by changes in neuronal excitability, the duration of feedback inhibition, and dynamic properties of the input. Our results predict that PFC networks are tuned for generating and selectively responding to beta- and gamma-rhythmic signals due to the natural and resonant properties of inhibition-based oscillators. They also suggest strategies for optimizing transcranial stimulation and using oscillatory networks in neuromorphic engineering.
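The qualitative mechanism — feedback inhibition turning an excitatory-inhibitory circuit into a resonator with a preferred input frequency above 15 Hz — can be sketched with a linearized two-population rate model. This is a generic illustration with assumed time constants and weights (`tau_e`, `tau_i`, `w_ie`, `w_ei`), not the biophysically detailed PFC model of the paper:

```python
import numpy as np

# Time constants (s) and E->I->E feedback weights: illustrative values only.
tau_e, tau_i = 0.005, 0.010
w_ie, w_ei = 3.0, 3.0

# Linearized dynamics around a fixed point:
#   dE/dt = (-E - w_ie*I + u)/tau_e,   dI/dt = (-I + w_ei*E)/tau_i
A = np.array([[-1.0 / tau_e, -w_ie / tau_e],
              [ w_ei / tau_i, -1.0 / tau_i]])
B = np.array([1.0 / tau_e, 0.0])

# Gain of the E population to sinusoidal drive u at each input frequency:
# |first component of (i*w*Id - A)^-1 B|.
freqs = np.linspace(1.0, 200.0, 1000)   # Hz
gain = np.array([abs(np.linalg.solve(2j * np.pi * f * np.eye(2) - A, B)[0])
                 for f in freqs])
f_res = freqs[gain.argmax()]            # resonant input frequency
```

With a sufficiently strong inhibitory feedback loop the transfer function develops a clear peak in the beta/gamma range, so the circuit responds selectively to rhythmic input near that frequency, as the abstract describes for the full model.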