We describe the first large scale analysis of gene translation that is based on a model that takes into account the physical and dynamical nature of this process. The Ribosomal Flow Model (RFM) predicts fundamental features of the translation process, including translation rates, protein abundance levels, ribosomal densities and the relation between all these variables, better than alternative (‘non-physical’) approaches. In addition, we show that the RFM can be used for accurate inference of various other quantities including genes' initiation rates and translation costs. These quantities could not be inferred by previous predictors. We find that increasing the number of available ribosomes (or equivalently the initiation rate) increases the genomic translation rate and the mean ribosome density only up to a certain point, beyond which both saturate. Strikingly, assuming that the translation system is tuned to work at the pre-saturation point maximizes the predictive power of the model with respect to experimental data. This result suggests that in all organisms that were analyzed (from bacteria to human), the global initiation rate is optimized to attain the pre-saturation point. The fact that similar results were not observed for heterologous genes indicates that this feature is under selection. Remarkably, the gap in performance between the RFM and alternative predictors is largest in the case of heterologous genes, testifying to the model's promising biotechnological value in predicting the abundance of heterologous proteins before they are expressed in the desired host.
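The saturation effect described above can be reproduced in a small numerical sketch of the RFM dynamics (all parameter values here are illustrative, not the paper's fitted values): each site carries a normalized ribosome density x_i, ribosomes enter at rate lam_init*(1 - x_1), hop forward at rates lam_i*x_i*(1 - x_{i+1}), and exit at rate lam_n*x_n, which is the steady-state translation rate.

```python
import numpy as np

def rfm_steady_state(lam_init, lam, t_end=300.0, dt=0.01):
    """Euler-integrate the RFM ODEs until (approximate) steady state."""
    lam = np.asarray(lam, dtype=float)
    n = len(lam)
    x = np.zeros(n)                               # ribosome density per site, in [0, 1]
    for _ in range(int(t_end / dt)):
        flow = lam[:-1] * x[:-1] * (1 - x[1:])    # site-to-site hopping flux
        dx = np.empty(n)
        dx[0] = lam_init * (1 - x[0])             # initiation flux in
        dx[1:] = flow                             # flux entering each downstream site
        dx[:-1] -= flow                           # flux leaving each upstream site
        dx[-1] -= lam[-1] * x[-1]                 # termination flux out
        x = x + dt * dx
    return x, lam[-1] * x[-1]                     # densities, translation rate
```

Raising the initiation rate from 0.1 to 1.0 substantially increases the translation rate, but a further tenfold increase to 10.0 adds very little, illustrating the saturation point the abstract refers to.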
Parallel recordings of spike trains of several single cortical neurons in behaving monkeys were analyzed as a hidden Markov process. The parallel spike trains were considered as a multivariate Poisson process whose vector firing rates change with time. As a consequence of this approach, the complete recording can be segmented into a sequence of a few statistically discriminated hidden states, whose dynamics are modeled as a first-order Markov chain. The biological validity and benefits of this approach were examined in several independent ways: (i) the statistical consistency of the segmentation and its correspondence to the behavior of the animal; (ii) direct measurement of the collective flips of activity, obtained by the model; and (iii) the relation between the segmentation and the pair-wise short-term cross-correlations between the recorded spike trains. Comparison with surrogate data was also carried out for each of the above examinations to confirm their significance. Our results indicated the existence of well-separated states of activity, within which the firing rates were approximately stationary. With our present data we could reliably discriminate six to eight such states. The transitions between states were fast and were associated with concomitant changes of firing rates of several neurons. Different behavioral modes and stimuli were consistently reflected by different states of neural activity. Moreover, the pair-wise correlations between neurons varied considerably between the different states, supporting the hypothesis that these distinct states were brought about by the cooperative action of many neurons.

While early sensory and late motor processes can be carried out in parallel, many intermediate processes are carried out serially (1-4). Our own introspective experience tells us that our thought processes evolve serially, one after the other.
Some current models of neural networks (5-7) also suggest a series of quasi-stable states which follow each other in succession.

Usually, the analysis of the activity of single neurons is done by looking at their firing rates in relation to some external marker, such as a visual stimulus or a movement. In the work presented here, we treat the activity of several single neurons, which were recorded in parallel, as a spike-count vector, i.e., a vector whose first component is the number of spikes generated by the first neuron in a given time window, the second component is the spike count of the second neuron in the same window, and so forth.

Until recently, almost no attempt was made to search for experimental evidence that the brain, or some part of it, goes through a sequence of distinct states. In the present work we examined whether spike-count vectors can be regarded as the output of a hidden Markov process which switches among discrete states of underlying collective activity. The HMM is a well-known technique of stochastic modeling used so far mostly for speech and handwriting recognition (10). Within this model, the observations are considered as...
To test whether spiking activity of six to eight simultaneously recorded neurons in the frontal cortex of a monkey can be characterized by a sequence of discrete and stable states, neuronal activity is analyzed by a hidden Markov model (HMM). Using the HMM method, we are able to detect distinct states of neuronal activity within which firing rates are approximately stationary. Transitions between states, as expressed by concomitant changes in the firing rates of several units, occur quite abruptly. The significance and consistency of the states are confirmed by comparison with simulated data. The detected states are specific to a monkey's response in a delayed localization task, allowing correct prediction of the response in approximately 90% of the trials. Similar predictive power is achieved by a model based simply on the response histograms (PSTH) of the units. The two models reach this predictive ability with different time courses: the PSTH model gains predictive power at a higher rate in the first second of the delay, and the HMM gains predictive power at a higher rate in the next 3 sec. In this later period, conventional methods such as the PSTH cannot detect any firing rate modulations, but the HMM successfully captures transitions between distinct states that are specific to the monkey's behavioral response and occur at highly variable times from trial to trial. Our results suggest that neuronal activity in this later period is described best as transitions among distinct states that may reflect discrete steps in the monkey's mental processes.
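The segmentation idea can be illustrated with a standard Viterbi decoder for an HMM with multivariate Poisson emissions (a toy sketch, not the authors' pipeline; the two-state setup, firing-rate vectors, and transition matrix below are all assumed for the example): each hidden state is a vector of per-neuron firing rates, and the decoder recovers the most likely state sequence from the spike-count vectors.

```python
import numpy as np
from math import lgamma

def log_poisson(counts, rates):
    """log P(counts | rates) for a vector of independent Poisson counts."""
    return float(np.sum(counts * np.log(rates) - rates)
                 - sum(lgamma(c + 1) for c in counts))

def viterbi(obs, rates, log_A, log_pi):
    """Most likely hidden-state path for the spike-count vectors in `obs`."""
    T, S = len(obs), len(log_pi)
    logp = np.array([[log_poisson(o, rates[s]) for s in range(S)] for o in obs])
    delta = log_pi + logp[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_A          # scores[i, j]: state i -> j
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(S)] + logp[t]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]                            # state label per time bin
```

On synthetic data with an abrupt rate flip between two neurons, the decoded path recovers the hidden switch, mirroring how the model segments recordings into stationary-rate states with fast transitions.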
Genetic robustness characterizes the constancy of the phenotype in the face of heritable perturbations. Previous investigations have used comprehensive single and double gene knockouts to study gene essentiality and pairwise gene interactions in the yeast Saccharomyces cerevisiae. Here we conduct an in silico multiple knockout investigation of a flux balance analysis model of the yeast's metabolic network. Cataloging gene sets that provide mutual functional backup, we identify sets of up to eight interacting genes and characterize the 'k robustness' (the depth of backup interactions) of each gene. We find that 74% (360) of the metabolic genes participate in processes that are essential to growth in a standard laboratory environment, compared with only 13% previously found to be essential using single knockouts. The genes' k robustness is shown to be a solid indicator of their biological buffering capacity and is correlated with both the genes' environmental specificity and their evolutionary retention.
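The notion of k robustness can be sketched on a toy model (a stand-in for the paper's FBA growth test; the gene names and backup groups below are invented for illustration): growth requires at least one intact gene per backup group, and a gene's k robustness is the size of the smallest lethal knockout set in which that gene is the one that tips the balance.

```python
from itertools import combinations

# Toy viability rule standing in for an FBA growth computation:
# growth survives as long as every backup group retains one intact gene.
GROUPS = [{"g1"}, {"g2", "g3"}, {"g4", "g5", "g6"}]
GENES = sorted(set().union(*GROUPS))

def viable(knocked):
    """True if the network still grows after deleting the genes in `knocked`."""
    return all(group - knocked for group in GROUPS)

def k_robustness(gene, genes=GENES):
    """Smallest lethal knockout set in which `gene` is the essential member."""
    others = [g for g in genes if g != gene]
    for k in range(1, len(genes) + 1):
        for combo in combinations(others, k - 1):
            knocked = set(combo) | {gene}
            # lethal with the gene, viable without it -> the gene was needed
            if not viable(knocked) and viable(set(combo)):
                return k
    return None  # the gene is never required for growth
```

In this toy network g1 is essential outright (k = 1), g2 and g3 back each other up (k = 2), and g4 through g6 form a triple backup (k = 3), mirroring how deeper backup depth signals greater buffering capacity.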
SUMMARY The EM algorithm is a numerical technique for the evaluation of maximum likelihood estimates for parameters describing incomplete data models. It is easy to apply in many problems and is stable but slow. The algorithm fails to provide a consistent estimator of the standard errors of the maximum likelihood estimates unless the additional analysis required by the Louis method is performed. Newton‐type or other gradient methods are faster and provide error estimates but tend to be unstable and require the analytical evaluation of likelihoods to derive expressions for the score function and (at least) approximations to the Fisher information matrix. The purpose of this paper is to expand on a result by Fisher that permits a unification of EM methodology and Newton methods. The evaluation of the individual observation‐by‐observation score functions of the incomplete data is a by‐product of the application of the E step of the EM algorithm. Once these become available, the Fisher information matrix may be consistently estimated, and the M step may be replaced by a fast Newton‐type step.
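The key identity can be sketched on an assumed toy model (a two-component Gaussian mixture with known component means 0 and 3, unit variances, and unknown mixing weight p; this is an illustration of the idea, not the paper's general construction): the per-observation score of the incomplete data falls out of the E step, so the Fisher information can be estimated empirically from those scores and a standard error obtained with no extra analytical work.

```python
import numpy as np

SQRT2PI = np.sqrt(2 * np.pi)

def phi(x, mu):
    """Unit-variance normal density centered at mu."""
    return np.exp(-0.5 * (x - mu) ** 2) / SQRT2PI

def em_with_scores(x, p=0.5, iters=300):
    """EM for the mixing weight p of  p*N(3,1) + (1-p)*N(0,1)."""
    for _ in range(iters):
        num = p * phi(x, 3.0)
        resp = num / (num + (1 - p) * phi(x, 0.0))    # E step: responsibilities
        # Fisher's identity: each observation's incomplete-data score equals
        # the conditional expectation of its complete-data score, which the
        # E step delivers as a by-product.
        scores = resp / p - (1 - resp) / (1 - p)
        p = resp.mean()                               # ordinary M step
    info = np.sum(scores ** 2)      # empirical Fisher information at the MLE
    return p, 1.0 / np.sqrt(info)   # MLE of p and its standard error
```

At the fixed point the scores sum to zero, so the sum of squared per-observation scores consistently estimates the information; the same `scores`/`info` pair could equally drive a Newton step p + sum(scores)/info in place of the M step.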
Research with humans and primates shows that the developmental course of the brain involves synaptic overgrowth followed by marked selective pruning. Previous explanations have suggested that this intriguing, seemingly wasteful phenomenon is utilized to remove "erroneous" synapses. We prove that this interpretation is wrong if synapses are Hebbian. Under limited metabolic energy resources restricting the amount and strength of synapses, we show that memory performance is maximized if synapses are first overgrown and then pruned following optimal "minimal-value" deletion. This optimal strategy leads to interesting insights concerning childhood amnesia.
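The overgrow-then-prune advantage can be illustrated in a toy Hebbian associative memory (network size, memory load, and deletion level below are assumed for the example, not the paper's analysis): under a fixed synapse budget, a fully grown Hebbian network pruned by minimal-value deletion retains its memories far better than a network wired randomly at the same sparsity.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 120, 6                                   # neurons, stored memories
patterns = rng.choice([-1, 1], size=(P, N))

# "Overgrowth": full Hebbian connectivity over all synapse pairs.
W = (patterns.T @ patterns).astype(float) / N
np.fill_diagonal(W, 0.0)

def prune(W, fraction, rng=None):
    """Zero out `fraction` of synapses: weakest-|w| first, or randomly."""
    W = W.copy()
    iu = np.triu_indices_from(W, 1)
    k = int(fraction * len(iu[0]))
    if rng is None:
        idx = np.argsort(np.abs(W[iu]))[:k]          # minimal-value deletion
    else:
        idx = rng.choice(len(iu[0]), size=k, replace=False)  # random control
    r, c = iu[0][idx], iu[1][idx]
    W[r, c] = 0.0
    W[c, r] = 0.0
    return W

def recall_overlap(W, pattern, steps=20):
    """Run sign dynamics from the stored pattern, report final overlap."""
    s = pattern.astype(float).copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return abs(float(s @ pattern)) / len(pattern)

W_min = prune(W, 0.85)                  # overgrow, then keep strongest 15%
W_rand = prune(W, 0.85, rng=rng)        # same synapse budget, random wiring
m_min = np.mean([recall_overlap(W_min, p) for p in patterns])
m_rand = np.mean([recall_overlap(W_rand, p) for p in patterns])
```

Minimal-value deletion keeps the synapses whose Hebbian weights encode the most pattern agreement, so recall survives deep pruning while the randomly diluted control degrades.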
This letter presents the multi-perturbation Shapley value analysis (MSA), an axiomatic, scalable, and rigorous method for deducing causal function localization from multi-perturbation data. The MSA, based on fundamental concepts from game theory, accurately quantifies the contributions of network elements and their interactions, overcoming several shortcomings of previous function localization approaches. Its successful operation is demonstrated in both the analysis of a neurophysiological model and of reversible deactivation data. The MSA has a wide range of potential applications, including the analysis of reversible deactivation experiments, neuronal laser ablations, and transcranial magnetic stimulation "virtual lesions," as well as in providing insight into the inner workings of computational models of neurophysiological systems.
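The game-theoretic core of the approach can be sketched with an exact Shapley computation (a toy illustration: the three "regions" and the performance function over intact-element sets are invented here, and real MSA estimates these values from sampled multi-perturbation experiments rather than exhaustive enumeration):

```python
from itertools import permutations

def shapley(elements, perf):
    """Exact Shapley values: average marginal contribution over all orderings."""
    perms = list(permutations(elements))
    values = {e: 0.0 for e in elements}
    for order in perms:
        active, prev = set(), perf(set())
        for e in order:
            active.add(e)
            cur = perf(active)
            values[e] += cur - prev   # marginal contribution of e in this order
            prev = cur
    return {e: v / len(perms) for e, v in values.items()}

# Hypothetical performance function over intact-element sets: region "A" is
# the main contributor, "B" a weaker backup, "C" irrelevant to the task.
def perf(intact):
    if "A" in intact:
        return 1.0
    if "B" in intact:
        return 0.5
    return 0.0
```

The resulting values (A: 0.75, B: 0.25, C: 0.0) sum to the intact network's performance (the efficiency axiom), cleanly separating the backup element B from the irrelevant element C, something a single-lesion analysis would miss.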
Reinforcement learning is a fundamental process by which organisms learn to achieve goals from their interactions with the environment. Using evolutionary computation techniques we evolve (near-)optimal neuronal learning rules in a simple neural network model of reinforcement learning in bumblebees foraging for nectar. The resulting neural networks exhibit efficient reinforcement learning, allowing the bees to respond rapidly to changes in reward contingencies. The evolved synaptic plasticity dynamics give rise to varying exploration/exploitation levels and to the well-documented choice strategies of risk aversion and probability matching. Additionally, risk aversion is shown to emerge even when bees are evolved in a completely risk-less environment. In contrast to existing theories in economics and game theory, risk-averse behavior is shown to be a direct consequence of (near-)optimal reinforcement learning, without requiring additional assumptions such as the existence of a nonlinear subjective utility function for rewards. Our results are corroborated by a rigorous mathematical analysis, and their robustness in real-world situations is supported by experiments in a mobile robot. Thus we provide a biologically founded, parsimonious, and novel explanation for risk aversion and probability matching.
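Probability matching of the kind described above can be sketched with a minimal reward-tracking choice rule (illustrative reward probabilities and learning rate; this is not the evolved network of the study): the agent tracks each flower's reward estimate with a delta rule and chooses in proportion to those estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
p_reward = {"blue": 0.8, "yellow": 0.3}    # assumed flower reward probabilities
est = {"blue": 0.5, "yellow": 0.5}         # running reward estimates
eta = 0.1                                  # learning rate (assumed value)
picks = []
for t in range(20000):
    p_blue = est["blue"] / (est["blue"] + est["yellow"])   # matching choice rule
    flower = "blue" if rng.random() < p_blue else "yellow"
    reward = float(rng.random() < p_reward[flower])
    est[flower] += eta * (reward - est[flower])            # delta-rule update
    picks.append(flower == "blue")
# after a burn-in, choice frequency settles near 0.8 / (0.8 + 0.3)
frac_blue = float(np.mean(picks[5000:]))
```

The estimates converge to the true reward probabilities, so the choice fraction settles near the matching prediction of roughly 0.73 rather than at the reward-maximizing extreme of always choosing blue, reproducing the probability-matching behavior the abstract describes.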