2014
DOI: 10.1371/journal.pone.0115764
Multiplex Networks of Cortical and Hippocampal Neurons Revealed at Different Timescales

Abstract: Recent studies have emphasized the importance of multiplex networks – interdependent networks with shared nodes and different types of connections – in systems primarily outside of neuroscience. Though the multiplex properties of networks are frequently not considered, most networks are actually multiplex networks and the multiplex specific features of networks can greatly affect network behavior (e.g. fault tolerance). Thus, the study of networks of neurons could potentially be greatly enhanced using a multip…

Cited by 48 publications (70 citation statements)
References: 135 publications
“…This approach to cover a longer history at a low dimensionality amounts to a compression of the information in the history of the process, aiming to retain what we perceive to be the most relevant information. This approach is similar to the one used by Timme et al [27], except for the use of nonuniform bin widths in our case. Alternative approaches to large bin widths exist that are either based (i) on nonuniform embedding, picking the most informative past samples (or bins with a small width on the order of the inverse sampling rate) from a collection of candidates (e.g., [28][29][30]), and the IDTxl toolbox [22]; or (ii) on varying the lag between a vector of evenly spaced past bins and the current sample [4,31,32], but both of these approaches might be less suitable for relatively sparse binary data, such as spike trains.…”
Section: Electrophysiological Data Acquisition and Preprocessing
confidence: 96%
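The history-compression idea described in the statement above — covering a long spike history with few dimensions by letting bins widen into the past — can be sketched in a few lines. This is an illustrative sketch only, not the authors' code: the helper name `embed_history` and the specific bin widths are assumptions for the example.

```python
import numpy as np

def embed_history(spikes, bin_widths):
    """Compress the most recent spike history into nonuniform bins.

    spikes: 1-D binary array (1 = spike in that sample), most recent sample last.
    bin_widths: widths (in samples) of successive history bins, nearest past first.
    Returns one spike count per bin, so a long history maps to few dimensions.
    """
    counts = []
    end = len(spikes)
    for w in bin_widths:
        start = max(0, end - w)          # clip at the start of the recording
        counts.append(int(spikes[start:end].sum()))
        end = start                      # next bin covers the earlier past
    return counts

# 12 samples of a binary spike train; bins widen further into the past
train = np.array([0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1])
print(embed_history(train, [1, 2, 4, 8]))  # → [1, 1, 1, 2]
```

With geometrically growing widths, a history of 1 + 2 + 4 + 8 = 15 samples is represented by only four numbers, trading temporal precision in the distant past for dimensionality.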
“…How should one rigorously compute the transfer entropy for such data sets? Previous approaches have attempted to apply the discrete time formalism to such systems in a number of ways, for example, in examining the information between most recent events in an economic setting [22], or in discretizing time (i.e., time binning) for spiking neural processes [23][24][25][26][27]. Such approaches necessarily recast the dynamics in order to make empirical approximations, which may ignore key mechanisms relevant to the source-target relationship.…”
Section: Introduction
confidence: 99%
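The discrete-time formalism the statement above refers to is, once spike trains have been time-binned into binary sequences, a plug-in estimate over joint symbol frequencies. A minimal sketch, assuming binary data and history length k (the function name and its simple frequency-counting estimator are illustrative, not the cited papers' exact method):

```python
import numpy as np
from collections import Counter

def transfer_entropy(source, target, k=1):
    """Plug-in transfer entropy (bits) from source to target for binary
    time series: TE = sum p(x_t, x_past, y_past)
      * log2[ p(x_t | x_past, y_past) / p(x_t | x_past) ]."""
    joint = Counter()
    for t in range(k, len(target)):
        tp = tuple(target[t - k:t])            # target's own past
        sp = tuple(source[t - k:t])            # source's past
        joint[(int(target[t]), tp, sp)] += 1
    total = sum(joint.values())
    p_xyz = {key: c / total for key, c in joint.items()}
    # marginals for the two conditional probabilities
    p_past, p_x_tp, p_tp = Counter(), Counter(), Counter()
    for (x, tp, sp), p in p_xyz.items():
        p_past[(tp, sp)] += p
        p_x_tp[(x, tp)] += p
        p_tp[tp] += p
    te = 0.0
    for (x, tp, sp), p in p_xyz.items():
        cond_full = p / p_past[(tp, sp)]       # p(x_t | x_past, y_past)
        cond_self = p_x_tp[(x, tp)] / p_tp[tp] # p(x_t | x_past)
        te += p * np.log2(cond_full / cond_self)
    return te
```

For a target that simply copies the source with a one-sample lag, this estimator returns close to 1 bit; for independent binary sequences it returns a value near zero (plus a small positive plug-in bias). The recasting the quoted statement warns about is visible here: everything hinges on the chosen bin size and history length k.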
“…Due to the widespread interest in neural connectivity (Bullmore and Sporns, 2009; Friston, 2011), transfer entropy has been widely used in the literature (for example, Honey et al, 2007; Lizier et al, 2008; Ito et al, 2011; Vicente et al, 2011; Timme et al, 2014b, 2016; Wibral et al, 2014b; Nigam et al, 2016; Bossomaier et al, 2016). Numerous methods have been employed to define past and future state (Staniek and Lehnertz, 2008; Ito et al, 2011; Wibral et al, 2013; Timme et al, 2014b). These methods allow for the exploration of interactions over certain time scales, the search for interactions with set delays (e.g., synaptic connectivity), or interactions involving patterns of activity.…”
Section: Methods
confidence: 99%
“…Surrogate data testing or Monte Carlo analysis is frequently the solution to significance testing in information theory analyses (Lindner et al, 2011; Timme et al, 2014b; Wibral et al, 2014a; Asaad et al, 2017). This type of analysis is performed by generating surrogate null model data that preserve certain aspects of the data while randomizing other aspects.…”
Section: Methods
confidence: 99%
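The surrogate-data scheme described above — preserve each signal's own statistics while destroying the source-target relationship, then compare the observed statistic against the resulting null distribution — can be sketched with a circular-shift surrogate. This is one common choice of null model, not necessarily the one used in the cited works; `surrogate_pvalue` is a hypothetical helper name.

```python
import numpy as np

def surrogate_pvalue(source, target, stat, n_surr=200, seed=0):
    """Monte Carlo significance test for a dependence statistic.

    Circularly shifting the source by a random offset breaks the
    source-target timing while preserving each series' own
    autocorrelation, giving a null distribution for stat.
    """
    rng = np.random.default_rng(seed)
    observed = stat(source, target)
    null = np.empty(n_surr)
    for i in range(n_surr):
        shift = rng.integers(1, len(source))
        null[i] = stat(np.roll(source, shift), target)
    # one-sided p-value with the standard +1 correction
    p = (1 + np.sum(null >= observed)) / (n_surr + 1)
    return observed, p
```

Any dependence measure can be plugged in as `stat` (absolute correlation, mutual information, a transfer entropy estimate); with 200 surrogates the smallest attainable p-value is 1/201, which bounds how strong a significance claim such a test can make.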