Understanding causal relationships, or effective connectivity, between parts of the brain is of utmost importance because a large part of the brain's activity is thought to be internally generated; hence, quantifying stimulus-response relationships alone does not fully describe brain dynamics. Past efforts to determine effective connectivity mostly relied on model-based approaches such as Granger causality or dynamic causal modeling. Transfer entropy (TE) is an alternative, information-theoretic measure of effective connectivity. TE does not require a model of the interaction and is inherently non-linear. We investigated the applicability of TE as a metric in a test for effective connectivity on electrophysiological data, based on simulations and magnetoencephalography (MEG) recordings of a simple motor task. In particular, we demonstrate that TE improved the detectability of effective connectivity for non-linear interactions, and for sensor-level MEG signals where linear methods are hampered by signal cross-talk due to volume conduction.
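Transfer entropy from X to Y quantifies the information that the past of X provides about the next value of Y beyond what Y's own past already provides: TE = H(Y_next | Y_past) - H(Y_next | Y_past, X_past). A minimal plug-in sketch for discrete time series (an illustrative toy, not the estimator used in the study above):

```python
import numpy as np

def joint_entropy(*vars):
    """Plug-in Shannon entropy (bits) of the joint distribution of 1-D arrays."""
    joint = np.stack(vars, axis=1)
    _, counts = np.unique(joint, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def transfer_entropy(source, target):
    """TE(source -> target) = H(Y_next | Y) - H(Y_next | Y, X), history length 1."""
    y_next, y, x = target[1:], target[:-1], source[:-1]
    return (joint_entropy(y_next, y) - joint_entropy(y)) \
         - (joint_entropy(y_next, y, x) - joint_entropy(y, x))

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 10_000)      # random binary source
y = np.roll(x, 1)                   # target copies the source with lag 1

te_xy = transfer_entropy(x, y)      # near 1 bit: x determines y's next value
te_yx = transfer_entropy(y, x)      # near 0 bits: no information flows back
```

The asymmetry of the two estimates (te_xy large, te_yx near zero) is what makes TE a directed measure, unlike correlation or mutual information.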
Cross-frequency coupling (CFC) has been proposed to coordinate neural dynamics across spatial and temporal scales. Despite its potential relevance for understanding healthy and pathological brain function, the standard CFC analysis and physiological interpretation come with fundamental problems. For example, apparent CFC can appear because of spectral correlations due to common non-stationarities that may arise in the total absence of interactions between neural frequency components. To provide a road map towards an improved mechanistic understanding of CFC, we organize the available and potential novel statistical/modeling approaches according to their biophysical interpretability. While we do not provide solutions for all the problems described, we provide a list of practical recommendations to avoid common errors and to enhance the interpretability of CFC analysis.
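The "standard CFC analysis" referred to above is typically a phase-amplitude coupling estimate such as the mean-vector-length modulation index. A self-contained numpy sketch on synthetic data (band edges, frequencies, and noise level are illustrative choices, not taken from the paper):

```python
import numpy as np

fs = 1000                                    # sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)                 # 20 s of synthetic data (even length)
rng = np.random.default_rng(1)
noise = lambda: 0.5 * rng.standard_normal(t.size)

slow = np.sin(2 * np.pi * 8 * t)             # 8 Hz "phase" rhythm
carrier = np.sin(2 * np.pi * 80 * t)         # 80 Hz "amplitude" rhythm
# Genuine phase-amplitude coupling: the 8 Hz phase modulates the 80 Hz envelope.
coupled = slow + (1 + np.cos(2 * np.pi * 8 * t)) * carrier + noise()
uncoupled = slow + carrier + noise()         # both rhythms present, no coupling

def bandpass(x, lo, hi):
    """Crude FFT brick-wall bandpass (adequate for a synthetic demo)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(x.size, 1 / fs)
    X[(f < lo) | (f > hi)] = 0
    return np.fft.irfft(X, x.size)

def analytic(x):
    """Analytic signal via the FFT Hilbert construction (assumes even length)."""
    X, h = np.fft.fft(x), np.zeros(x.size)
    h[0] = h[x.size // 2] = 1
    h[1:x.size // 2] = 2
    return np.fft.ifft(X * h)

def modulation_index(x, phase_band=(6, 10), amp_band=(70, 90)):
    """Mean-vector-length phase-amplitude coupling estimate."""
    phase = np.angle(analytic(bandpass(x, *phase_band)))
    amp = np.abs(analytic(bandpass(x, *amp_band)))
    return np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)

mi_coupled = modulation_index(coupled)       # large: envelope locked to 8 Hz phase
mi_uncoupled = modulation_index(uncoupled)   # small: no phase-amplitude relation
```

Note that, per the caveats above, a high index on real data does not by itself establish a physiological coupling mechanism: non-stationarities and non-sinusoidal waveforms can produce the same spectral correlations.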
Multielectrode recordings have revealed zero-time-lag synchronization among remote cerebral cortical areas. However, the axonal conduction delays between such distant regions can amount to several tens of milliseconds, and it is still unclear which mechanism gives rise to the isochronous discharge of widely distributed neurons despite these latencies. Here, we investigated the synchronization properties of a simple network motif and found that, even in the presence of large axonal conduction delays, distant neuronal populations self-organize into lag-free oscillations. According to our results, cortico-cortical association fibers and certain cortico-thalamo-cortical loops represent ideal circuits to circumvent the phase shifts and time lags associated with conduction delays.

Keywords: thalamocortical system | isochronous oscillations | phase locking | long-range synchronization | axonal latency

Cells in the visual cortex of mammals tend to fire simultaneously when activated by related features of a visual stimulus (1-4). This observation provided some of the early evidence that the nervous system may use an internal temporal code to process information. Since then, multicell electrophysiological studies have revealed the synchronous discharge of neurons distributed across different structures of the cerebral cortex, hippocampal formation, and thalamus (5, 6). The biological significance of this phenomenon derives from the observation that such precise and coordinated spike timing correlates with perception and behavioral performance (7-10). Remarkably, synchrony of neuronal activity is not limited to short-range interactions within a cortical patch. Interareal synchronization across cortical regions, including interhemispheric areas, has been observed in several tasks (7, 9, 11-14).
The topological specificity and temporal unfolding of the synchrony reported in such studies are in agreement with its assumed role of subserving the effective "coupling" of the neuronal dynamics of the respective regions (9, 15). Beyond its functional relevance, the zero-time-lag synchrony among such distant neuronal ensembles must be established by mechanisms that can compensate for the delays involved in neuronal communication. Latencies in conducting nerve impulses down axonal processes can amount to delays of several tens of milliseconds between the generation of a spike in a presynaptic cell and the elicitation of a postsynaptic potential (16). The question is how, despite such temporal delays, the reciprocal interactions between two brain regions can lead the associated neural populations to fire in unison. Direct cortico-cortical fibers are major pathways of transareal communication and thus one principal substrate for the establishment of long-range synchrony. For instance, severing the corpus callosum was observed to disrupt the interhemispheric synchrony between the homotopic areas 17 of the cat (17). However, it is not clear whether direct excitatory cortico-cortical connections alone can mediate the zero-phase synchronization of reciproca...
Evolution of cooperation and competition can appear when multiple adaptive agents share a biological, social, or technological niche. In the present work we study how cooperation and competition emerge between autonomous agents that learn by reinforcement while using only their raw visual input as the state representation. In particular, we extend the Deep Q-Learning framework to multiagent environments to investigate the interaction between two learning agents in the well-known video game Pong. By manipulating the classical rewarding scheme of Pong we show how competitive and collaborative behaviors emerge. We also describe the progression from competitive to collaborative behavior when the incentive to cooperate is increased. Finally we show how learning by playing against another adaptive agent, instead of against a hard-wired algorithm, results in more robust strategies. The present work shows that Deep Q-Networks can become a useful tool for studying decentralized learning of multiagent systems coping with high-dimensional environments.
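As a toy analogue of manipulating the rewarding scheme, two independent stateless Q-learners in a repeated two-action game can be pushed between coordination and endless competition purely by the reward structure. This is a deliberately minimal sketch of decentralized multiagent reinforcement learning, not the Deep Q-Network setup of the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

def train(reward_mode, rounds=5000, lr=0.1, eps=0.05):
    """Two independent stateless Q-learners repeatedly play a 2-action game.
    'cooperative': both agents get +1 when their actions match, -1 otherwise.
    'competitive': zero-sum matching pennies (agent 0 wants a match,
    agent 1 wants a mismatch)."""
    Q = np.zeros((2, 2))                     # Q[agent, action]
    matched = []
    for _ in range(rounds):
        # Epsilon-greedy action selection, independently for each agent.
        acts = [int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[i]))
                for i in range(2)]
        m = acts[0] == acts[1]
        if reward_mode == "cooperative":
            r = (1.0, 1.0) if m else (-1.0, -1.0)
        else:
            r = (1.0, -1.0) if m else (-1.0, 1.0)
        for i in range(2):                   # simple stateless Q update
            Q[i, acts[i]] += lr * (r[i] - Q[i, acts[i]])
        matched.append(m)
    return float(np.mean(matched[-1000:]))   # final coordination rate

coop_rate = train("cooperative")             # agents settle on a matching action
comp_rate = train("competitive")             # behavior keeps cycling instead
```

Under shared rewards the two learners quickly lock onto a coordinated joint action; under the zero-sum scheme no stable pure strategy exists and the agents keep adapting against each other, mirroring the competitive-to-collaborative progression described above.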
In complex networks such as gene networks, traffic systems or brain circuits it is important to understand how long it takes for the different parts of the network to effectively influence one another. In the brain, for example, axonal delays between brain areas can amount to several tens of milliseconds, adding an intrinsic component to any timing-based processing of information. Inferring neural interaction delays is thus needed to interpret the information transfer revealed by any analysis of directed interactions across brain structures. However, a robust estimation of interaction delays from neural activity faces several challenges if modeling assumptions on interaction mechanisms are wrong or cannot be made. Here, we propose a robust estimator for neuronal interaction delays rooted in an information-theoretic framework, which allows a model-free exploration of interactions. In particular, we extend transfer entropy to account for delayed source-target interactions, while crucially retaining the conditioning on the embedded target state at the immediately previous time step. We prove that this particular extension is indeed guaranteed to identify interaction delays between two coupled systems and is the only relevant option in keeping with Wiener’s principle of causality. We demonstrate the performance of our approach in detecting interaction delays on finite data by numerical simulations of stochastic and deterministic processes, as well as on local field potential recordings. We also show the ability of the extended transfer entropy to detect the presence of multiple delays, as well as feedback loops. While evaluated on neuroscience data, we expect the estimator to be useful in other fields dealing with network dynamics.
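The delay-sensitive extension described above replaces the source's current state with its state u samples in the past, while keeping the conditioning on the target's immediately preceding state; scanning u and taking the argmax then recovers the interaction delay. A plug-in sketch for discrete data (parameters and the delay value are arbitrary illustrative choices):

```python
import numpy as np

def joint_entropy(*vars):
    """Plug-in Shannon entropy (bits) of the joint distribution of 1-D arrays."""
    joint = np.stack(vars, axis=1)
    _, counts = np.unique(joint, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def delayed_te(source, target, u):
    """TE(source -> target) for candidate delay u >= 1, still conditioning on
    the target state immediately before the predicted sample (Wiener's
    principle of predicting from the system's own past first)."""
    y_next = target[u:]                    # y_t, the sample to be predicted
    y_prev = target[u - 1:-1]              # y_{t-1}, immediately previous state
    x_del = source[:target.size - u]       # x_{t-u}, delayed source state
    return (joint_entropy(y_next, y_prev) - joint_entropy(y_prev)) \
         - (joint_entropy(y_next, y_prev, x_del) - joint_entropy(y_prev, x_del))

rng = np.random.default_rng(2)
x = rng.integers(0, 2, 20_000)
true_delay = 5
y = np.empty_like(x)
y[:true_delay] = rng.integers(0, 2, true_delay)
y[true_delay:] = x[:-true_delay]           # y_t = x_{t-5}

te_per_delay = {u: delayed_te(x, y, u) for u in range(1, 10)}
recovered = max(te_per_delay, key=te_per_delay.get)   # argmax over delays
```

The TE profile peaks sharply at the true delay and stays near zero elsewhere; in particular, keeping the target conditioning at t-1 prevents the spurious plateau of high TE values at delays longer than the true one.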
We show that isochronous synchronization between two delay-coupled oscillators can be achieved by relaying the dynamics via a third mediating element, which surprisingly lags behind the synchronized outer elements. The zero-lag synchronization thus obtained is robust over a considerable parameter range. We substantiate our claims with experimental and numerical evidence of such synchronization solutions in a chain of three coupled semiconductor lasers with long interelement coupling delays. The generality of the mechanism is validated in a neuronal model with the same coupling architecture. Thus, our results show that zero-lag synchronized chaotic dynamical states can occur over long distances through relaying, without restriction by the amount of delay.
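The relay mechanism can be illustrated with something far simpler than semiconductor lasers or spiking neurons: three delay-coupled Kuramoto phase oscillators in a chain, where the outer two interact only through the middle element. All parameters below are arbitrary illustrative choices, not values from the experiments:

```python
import numpy as np

dt, tau, K, omega = 0.01, 0.3, 1.0, 1.0   # step, coupling delay, strength, frequency
steps, d = 30_000, int(tau / dt)          # d: delay measured in integration steps

# Chain 0 -- 1 -- 2: outer oscillators couple only through the middle relay.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])

rng = np.random.default_rng(3)
theta = np.zeros((steps, 3))
theta[:d + 1] = rng.uniform(0, 2 * np.pi, 3)   # constant random history

for t in range(d, steps - 1):
    # Each unit sees the others' phases tau time units in the past (Euler step).
    coupling = (A * np.sin(theta[t - d][None, :] - theta[t][:, None])).sum(axis=1)
    theta[t + 1] = theta[t] + dt * (omega + K * coupling)

tail = theta[-2000:]
circ_lag = lambda a, b: np.angle(np.mean(np.exp(1j * (a - b))))
lag_outer = circ_lag(tail[:, 0], tail[:, 2])   # outer-outer: near zero lag
lag_relay = circ_lag(tail[:, 0], tail[:, 1])   # positive: relay lags the outers
```

Because the two outer oscillators receive identical (delayed) input from the relay, the zero-lag state between them is dynamically stable regardless of the conduction delay, while the mediating element itself settles at a small phase lag behind them, consistent with the counterintuitive result described above.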
Background: Transfer entropy (TE) is a measure for the detection of directed interactions. It is an information-theoretic implementation of Wiener's principle of observational causality and offers an approach to detecting neuronal interactions that is free of an explicit model of the interactions. Hence, it has the power to analyze linear and nonlinear interactions alike, which allows, for example, the comprehensive analysis of directed interactions in neural networks at various levels of description. Here we present the open-source MATLAB toolbox TRENTOOL, which allows the user to handle the considerable complexity of this measure and to validate the obtained results using non-parametric statistical testing. We demonstrate the use of the toolbox and the performance of the algorithm on simulated data with nonlinear (quadratic) coupling and on local field potentials (LFP) recorded from the retina and the optic tectum of the turtle (Pseudemys scripta elegans), where a one-way neuronal connection is likely present.
Results: In simulated data, TE reliably detected information flow in the simulated direction, with false positives not exceeding the rates expected under the null hypothesis. In the LFP data we found directed interactions from the retina to the tectum, despite the complicated signal transformations between these stages; no false-positive interactions were detected in the reverse direction.
Conclusions: TRENTOOL is an implementation of transfer entropy and mutual information analysis that aims to support the user in applying this information-theoretic measure. It is implemented as a MATLAB toolbox and available under an open-source license (GPL v3). For use with neural data, TRENTOOL seamlessly integrates with the popular FieldTrip toolbox.
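The non-parametric validation step can be mimicked with surrogate data: recompute TE after destroying the source-target temporal relation (here by circular time shifts of the source, an assumption-light surrogate) and compare the observed value against the resulting null distribution. A minimal Python sketch of the idea; TRENTOOL itself is a MATLAB toolbox, so this is not its API:

```python
import numpy as np

def joint_entropy(*vars):
    """Plug-in Shannon entropy (bits) of the joint distribution of 1-D arrays."""
    joint = np.stack(vars, axis=1)
    _, counts = np.unique(joint, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def transfer_entropy(source, target):
    """Plug-in TE(source -> target) in bits, history length 1."""
    y_next, y, x = target[1:], target[:-1], source[:-1]
    return (joint_entropy(y_next, y) - joint_entropy(y)) \
         - (joint_entropy(y_next, y, x) - joint_entropy(y, x))

rng = np.random.default_rng(4)
x = rng.integers(0, 2, 5_000)
y = np.roll(x, 1)                        # one-way coupling x -> y at lag 1

observed = transfer_entropy(x, y)
n_perm = 99
# Null distribution: TE after random circular shifts of the source, which
# preserve its autocorrelation but break the coupling to the target.
null = np.array([transfer_entropy(np.roll(x, int(rng.integers(100, x.size - 100))), y)
                 for _ in range(n_perm)])
p_value = (1 + np.sum(null >= observed)) / (n_perm + 1)   # one-sided, +1 corrected
```

Because the plug-in estimator is biased on finite data, comparing against surrogates rather than against zero is what keeps the false-positive rate at the nominal level.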
Highlights:
- Fundamental caveats and confounds in the methodology of assessing CFC are discussed.
- Significant CFC can be observed without any underlying physiological coupling.
- Non-stationarity of a time series leads to spectral correlations that can be interpreted as CFC.
- We offer practical recommendations that can relieve some of the current confounds.
- Further theoretical and experimental work is needed to ground CFC analysis.
(bioRxiv preprint, doi:10.1101/005926, first posted online Jun. 4, 2014; made available under a CC-BY-NC-ND 4.0 International license.)

Cross-frequency coupling: How much is that in real money?
One of the central questions in neuroscience is how neural activity is coordinated across different spatial and temporal scales. An elegant solution to this problem could be that the activity of local neural populations is modulated according to the global neuronal dynamics. As larger populations oscillate and synchronize at lower frequencies while smaller ensembles are active at higher frequencies [1], cross-frequency coupling would facilitate flexible coordination of neural activity simultaneously in time and space. In line with this proposal, many studies have reported such cross-frequency relationships [2-4]. Especially phase-amplitude CFC, where the phase of the low-frequency component modulates the amplitude of the high-frequency activity, has been claimed to play important functional roles in neural information processing and cognition, e.g. in learning and memory [4-8]. Furthermore, changes in CFC patterns have been linked to certain neurological and mental disorders such as Parkinson's disease [9-11], schizo...