Network inference algorithms are valuable tools for the study of large-scale neuroimaging datasets. Multivariate transfer entropy is well suited for this task, being a model-free measure that captures nonlinear and lagged dependencies between time series to infer a minimal directed network model. Greedy algorithms have been proposed to efficiently deal with high-dimensional datasets while avoiding redundant inferences and capturing synergistic effects. However, multiple statistical comparisons may inflate the false positive rate and are computationally demanding, which limited the size of previous validation studies. The algorithm we present—as implemented in the IDTxl open-source software—addresses these challenges by employing hierarchical statistical tests to control the family-wise error rate and to allow for efficient parallelization. The method was validated on synthetic datasets involving random networks of increasing size (up to 100 nodes), for both linear and nonlinear dynamics. The performance increased with the length of the time series, reaching consistently high precision, recall, and specificity (>98% on average) for 10,000 time samples. Varying the statistical significance threshold showed a more favorable precision-recall trade-off for longer time series. Both the network size and the sample size are one order of magnitude larger than previously demonstrated, showing feasibility for typical EEG and magnetoencephalography experiments.
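To make the transfer-entropy idea concrete, the following is a minimal self-contained sketch (our own construction, not the IDTxl implementation) of bivariate transfer entropy under a Gaussian model. In that setting the measure reduces to half the log-ratio of the residual variances of two nested linear models — the familiar Granger-causality form — estimated here with ordinary least squares on a toy coupled autoregressive pair:

```python
import numpy as np

def gaussian_te(source, target, lag=1):
    """Transfer entropy source -> target (in nats) under a Gaussian model:
    half the log-ratio of residual variances with and without the source's past
    (equivalent to Granger causality for linear-Gaussian dynamics)."""
    tgt = target[lag:]
    tgt_past = target[:-lag]
    src_past = source[:-lag]

    def resid_var(regressors):
        # OLS fit of the target on an intercept plus the given past values.
        X = np.column_stack([np.ones_like(tgt)] + regressors)
        beta, *_ = np.linalg.lstsq(X, tgt, rcond=None)
        return np.var(tgt - X @ beta)

    return 0.5 * np.log(resid_var([tgt_past]) / resid_var([tgt_past, src_past]))

# Toy system: X drives Y with a one-sample lag; X receives no feedback.
rng = np.random.default_rng(1)
n = 10000
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()

te_xy = gaussian_te(x, y)  # clearly positive: X's past improves prediction of Y
te_yx = gaussian_te(y, x)  # near zero: no coupling in this direction
```

The nesting of the two regressions guarantees a non-negative estimate, which is why (as the abstract emphasises) statistical significance testing is needed to separate genuine coupling from estimator bias.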
We present IDTxl (the Information Dynamics Toolkit xl), a new open-source Python toolbox for effective network inference from multivariate time series using information theory, available from GitHub (https://github.com/pwollstadt/IDTxl). Information theory (Cover & Thomas, 2006; MacKay, 2003; Shannon, 1948) is the mathematical theory of information and its transmission over communication channels. It provides quantitative measures of the information content of a single random variable (entropy) and of the information shared between two variables (mutual information). These measures build on probability theory and depend solely on the probability distributions of the variables involved. As a consequence, the dependence between two variables can be quantified as the information shared between them, without the need to explicitly model a specific type of dependence. Hence, mutual information is a model-free measure of dependence, which makes it a popular choice for the analysis of systems other than communication channels.
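As an illustration of model-free dependence, the plug-in (histogram) mutual-information estimator below detects a purely quadratic dependence that the Pearson correlation misses entirely. The function name and binning choice are ours for illustration, not part of the IDTxl API:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    # Plug-in (histogram) estimate of I(X;Y) in bits.
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0                          # avoid log(0) on empty cells
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=20000)
y_indep = rng.normal(size=20000)          # independent of x
y_nonlin = x**2 + 0.1 * rng.normal(size=20000)  # nonlinear function of x

corr = np.corrcoef(x, y_nonlin)[0, 1]     # near zero: E[x * x^2] = 0
mi_indep = mutual_information(x, y_indep)     # near zero (small plug-in bias)
mi_nonlin = mutual_information(x, y_nonlin)   # clearly positive
```

Note that the plug-in estimator is positively biased for finite samples, which is again why the toolbox pairs estimation with significance testing.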
Functional and effective networks inferred from time series are at the core of network neuroscience. Interpreting properties of these networks requires inferred network models to reflect key underlying structural features. However, even a few spurious links can severely distort network measures, posing a challenge for functional connectomes. We study the extent to which micro- and macroscopic properties of underlying networks can be inferred by algorithms based on mutual information and bivariate/multivariate transfer entropy. The validation is performed on two macaque connectomes and on synthetic networks with various topologies (regular lattice, small-world, random, scale-free, modular). Simulations are based on a neural mass model and on autoregressive dynamics (employing Gaussian estimators for direct comparison to functional connectivity and Granger causality). We find that multivariate transfer entropy captures key properties of all network structures for longer time series. Bivariate methods can achieve higher recall (sensitivity) for shorter time series but are unable to control false positives (lower specificity) as the amount of available data increases. This leads to overestimated clustering, small-world, and rich-club coefficients, underestimated shortest path lengths and hub centrality, and fattened tails of the degree distribution. Caution should therefore be used when interpreting network properties of functional connectomes obtained via correlation or pairwise statistical dependence measures, rather than more holistic (yet data-hungry) multivariate models.
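The bivariate/multivariate distinction can be illustrated with a toy causal chain X → Y → Z (our construction, reusing the Gaussian residual-variance form of transfer entropy): a bivariate test reports an apparently direct X → Z link, because all of X's influence reaches Z through Y, while conditioning on Y's past removes the spurious link:

```python
import numpy as np

def resid_var(y, regressors):
    # Residual variance of an OLS fit of y on an intercept plus the regressors.
    X = np.column_stack([np.ones_like(y)] + regressors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ beta)

# Chain X -> Y -> Z with unit lags; X affects Z only via Y, at a lag of two.
rng = np.random.default_rng(2)
n = 20000
x = rng.normal(size=n)
y = np.zeros(n)
z = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * x[t - 1] + rng.normal()
    z[t] = 0.8 * y[t - 1] + rng.normal()

# Align z_t with z_{t-1}, y_{t-1}, and x_{t-2}.
zt, zpast, ypast, xpast = z[2:], z[1:-1], y[1:-1], x[:-2]

# Bivariate TE X -> Z: X's past looks informative (a spurious direct link).
te_biv = 0.5 * np.log(resid_var(zt, [zpast]) / resid_var(zt, [zpast, xpast]))
# Multivariate (conditional) TE X -> Z given Y: the apparent link vanishes.
te_multi = 0.5 * np.log(resid_var(zt, [zpast, ypast])
                        / resid_var(zt, [zpast, ypast, xpast]))
```

This is exactly the mechanism behind the inflated clustering and fattened degree-distribution tails reported above: every two-step path risks becoming a spurious edge under a bivariate analysis.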
Edge time series are increasingly used in brain imaging to study the node functional connectivity (nFC) dynamics at the finest temporal resolution while avoiding sliding windows. Here, we lay the mathematical foundations for the edge-centric analysis of neuroimaging time series, explaining why a few high-amplitude cofluctuations drive the nFC across datasets. Our exposition also constitutes a critique of the existing edge-centric studies, showing that their main findings can be derived from the nFC under a static null hypothesis that disregards temporal correlations. Testing the analytic predictions on functional MRI data from the Human Connectome Project confirms that the nFC can explain most variation in the edge FC matrix, the edge communities, the large cofluctuations, and the corresponding spatial patterns. We encourage the use of dynamic measures in future research, which exploit the temporal structure of the edge time series and cannot be replicated by static null models.
Classic psychedelic-induced ego dissolution involves a shift in the sense of self and a blurring of the boundary between the self and the world. A similar phenomenon is identified in psychopathology and is associated with the balance of anticorrelated activity between the default mode network (DMN), which directs attention inwards, and the salience network (SN), which recruits the dorsal attention network (DAN) to direct attention outwards. To test whether changes in these anticorrelated networks underlie the peak effects of LSD, we applied dynamic causal modeling to infer effective connectivity from resting-state functional MRI scans in a study of 25 healthy adults who were administered 100 µg of LSD or placebo. We found that the inhibitory effective connectivity from the SN to the DMN became excitatory, and the inhibitory effective connectivity from the DMN to the DAN decreased, under the peak effect of LSD. These changes in connectivity reflect a diminution of the anticorrelation between resting-state networks that may be a key neural mechanism of LSD-induced ego dissolution. Our findings suggest that the hierarchically organised balance of resting-state networks is a central feature in the construct of self.

Significance: The findings can inform the parallel between the maintenance of the subject-object boundary and changes to anticorrelated canonical resting-state brain networks. Effective connectivity informs the hierarchical organisation of brain networks underlying modes of perception. Moreover, the anticorrelation of brain networks is an important measure of mental function. Understanding the neural mechanisms of anticorrelation change under psychedelics helps identify its relationship to psychosis and its association with psychedelic-assisted therapeutic outcomes.