Standard human EEG systems based on spatial Nyquist estimates suggest that 20–30 mm electrode spacing suffices to capture neural signals on the scalp, but recent studies posit that increasing sensor density can provide higher resolution neural information. Here, we compared “super-Nyquist” density EEG (“SND”) with Nyquist density (“ND”) arrays for assessing the spatiotemporal aspects of early visual processing. EEG was measured from 128 electrodes arranged over occipitotemporal brain regions (14 mm spacing) while participants viewed flickering checkerboard stimuli. Analyses compared SND with ND-equivalent subsets of the same electrodes. Frequency-tagged stimuli were classified more accurately with SND than ND arrays in both the time and the frequency domains. Representational similarity analysis revealed that a computational model of V1 correlated more highly with the SND than the ND array. Overall, SND EEG captured more neural information from visual cortex, arguing for increased development of this approach in basic and translational neuroscience.
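As a back-of-the-envelope illustration of the spacing argument (not code from the study), the sketch below computes the spatial Nyquist limit implied by a given inter-electrode spacing, and shows one simple way an ND-equivalent subset can be drawn from a denser grid by keeping every other electrode along each axis. The 14 mm figure comes from the abstract; the 28 mm ND spacing, the grid layout, and all function names are assumptions.

```python
def spatial_nyquist_cycles_per_cm(spacing_mm: float) -> float:
    """Highest spatial frequency (cycles/cm) that a regular grid with the
    given inter-electrode spacing (mm) can represent without aliasing:
    1 / (2 * spacing)."""
    spacing_cm = spacing_mm / 10.0
    return 1.0 / (2.0 * spacing_cm)

# SND array from the study: 14 mm spacing; a conventional ND array: ~28 mm.
snd = spatial_nyquist_cycles_per_cm(14.0)
nd = spatial_nyquist_cycles_per_cm(28.0)   # half the spatial bandwidth

def nd_subset(electrodes):
    """Toy ND-equivalent subset of an SND grid: keep every other electrode
    along each grid axis, doubling the effective spacing."""
    return [(r, c) for (r, c) in electrodes if r % 2 == 0 and c % 2 == 0]
```

Halving the spacing doubles the recoverable spatial bandwidth, which is the intuition behind comparing SND arrays against ND-equivalent subsets of the same electrodes.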
We present a novel signal processing algorithm for automated, noninvasive detection of Cortical Spreading Depolarizations (CSDs) using electroencephalography (EEG) signals and validate the algorithm on simulated EEG signals. CSDs are waves of neurochemical changes that suppress neuronal activity as they propagate across the brain's cortical surface. CSDs are believed to mediate secondary brain damage after brain trauma and cerebrovascular diseases like stroke. We address key challenges in detecting CSDs from EEG signals: (i) decay of high spatial frequencies as they travel from the cortical surface to the scalp surface; and (ii) presence of sulci and gyri, which makes it difficult to track the CSD waves as they travel across the cortex. Our algorithm detects and tracks "wavefronts" of the CSD wave, and stitches together data across space and time to decide on the presence of a CSD wave. To test our algorithm, we provide different models and complex patterns of CSD waves, including different widths of CSD suppressions, and use these models to simulate scalp EEG signals using head models of 4 subjects from the OASIS dataset. Our results suggest that the average width of suppression that a low-density EEG grid of 40 electrodes can detect is 1.1 cm, which includes a vast majority of CSD suppressions, but not all. A higher density EEG grid having 340 electrodes can detect complex CSD patterns as thin as 0.43 cm (less than minimum widths reported in prior works), among which single-gyrus propagation is the hardest to detect because of its small suppression area.
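The wavefront-detection-and-stitching idea can be caricatured in a few lines. The sketch below is a toy reconstruction, not the paper's algorithm: it thresholds a per-electrode amplitude envelope to find suppression onsets, then accepts a candidate wave only if onsets progress across neighboring electrodes at a physiologically plausible CSD speed. The threshold, the linear-array layout, and the 1-9 mm/min speed band are assumptions for illustration.

```python
def suppression_onset(envelope, threshold):
    """Index of the first sample where the EEG amplitude envelope drops
    below `threshold` (None if no suppression is seen)."""
    for i, amplitude in enumerate(envelope):
        if amplitude < threshold:
            return i
    return None

def is_csd_like(onsets, spacing_mm, fs_hz, v_min=1.0, v_max=9.0):
    """Stitch onsets across neighboring electrodes: a CSD-like wave has
    strictly increasing onsets whose implied propagation speed (mm/min)
    lies in [v_min, v_max] between every adjacent electrode pair."""
    for a, b in zip(onsets, onsets[1:]):
        if a is None or b is None or b <= a:
            return False
        minutes = (b - a) / fs_hz / 60.0
        if not (v_min <= spacing_mm / minutes <= v_max):
            return False
    return True
```

For example, onsets of [0, 60, 120] samples at 1 Hz across electrodes 3 mm apart imply 3 mm/min, which falls in the accepted band, whereas onsets one sample apart imply an implausibly fast wave and are rejected.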
The needs of a business (e.g., hiring) may require the use of certain critical features, such that any discrimination arising from them should be exempted. In this work, we propose a novel information-theoretic decomposition of the total discrimination (in a counterfactual sense) into a non-exempt component, which quantifies the part of the discrimination that cannot be accounted for by the critical features, and an exempt component, which quantifies the remaining discrimination. Our decomposition enables selective removal of the non-exempt component if desired. We arrive at this decomposition through examples and counterexamples that enable us to first obtain a set of desirable properties that any measure of non-exempt discrimination should satisfy. We then demonstrate that our proposed quantification of non-exempt discrimination satisfies all of them. This decomposition leverages a body of work from information theory called Partial Information Decomposition (PID). We also obtain an impossibility result showing that no observational measure of non-exempt discrimination can satisfy all of the desired properties, which leads us to relax our goals and examine alternative observational measures that satisfy only some of these properties. We then perform a case study using one observational measure to show how one might train a model allowing for exemption of discrimination due to critical features.
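The paper's non-exempt measure is built from Partial Information Decomposition and is not reproduced here. As a much cruder observational stand-in, one can contrast the total statistical disparity I(Z; Yh) between a sensitive attribute Z and a prediction Yh with what remains after conditioning on a critical feature Xc, namely I(Z; Yh | Xc). The sketch below estimates these from empirical counts; the variable names and the use of conditional mutual information as a proxy are assumptions for illustration, not the paper's definition.

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """I(A; B) in bits, estimated from a list of (a, b) samples."""
    n = len(pairs)
    pab = Counter(pairs)
    pa = Counter(a for a, _ in pairs)
    pb = Counter(b for _, b in pairs)
    return sum((c / n) * log2((c / n) / ((pa[a] / n) * (pb[b] / n)))
               for (a, b), c in pab.items())

def conditional_mi(triples):
    """I(A; B | C) from (a, b, c) samples, via the chain rule:
    I(A; B, C) - I(A; C)."""
    abc = [(a, (b, c)) for a, b, c in triples]
    ac = [(a, c) for a, _, c in triples]
    return mutual_information(abc) - mutual_information(ac)
```

If the prediction is fully determined by the critical feature and independent of Z given it, the conditional term vanishes even when the total disparity does not, which is the intuition the exempt/non-exempt split makes precise.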
We develop a theoretical framework for defining and identifying flows of information in computational systems. Here, a computational system is assumed to be a directed graph, with "clocked" nodes that send transmissions to each other along the edges of the graph at discrete points in time. A few measures of information flow have been proposed previously in the literature, and measures of directed causal influence are currently being used as a heuristic proxy for information flow. However, there is as yet no rigorous treatment of the problem with formal definitions and clearly stated assumptions, and the process of defining information flow is often conflated with the problem of estimating it. In this work, we provide a new information-theoretic definition for information flow in a computational system, which we motivate using a series of examples. We then show that this definition satisfies intuitively desirable properties, including the existence of "information paths", along which information flows from the input of the computational system to its output. Finally, we describe how information flow might be estimated in a noiseless setting, and provide an algorithm to identify information paths on the time-unrolled graph of a computational system.

1. Causal in the "Signals and Systems" sense of the word, where a node cannot make use of future transmissions [41].
2. Although the work of Ahlswede et al. (2000) is titled "Network Information Flow", it actually addresses a different problem: that of the achievable rate region of a broadcast network and the optimal coding strategy that achieves this rate. In contrast to their work, which concentrates on characterizing and achieving the optimal rate, our focus is on understanding how information flows as nodes communicate to each other over time.

Figure 1: A diagram showing an example of how a complete directed graph is unrolled to create a time-unrolled graph. On the left, we show a complete directed graph G* that has three nodes, V* = {A, B, C}. These nodes are fully connected to each other via edges E*, including self-edges. On the right, we show how G* has been unrolled using time indices T = {0, 1, 2} to obtain a time-unrolled graph G. The set of all nodes at time t = 0 is V_0 and the set of all (outgoing) edges at time t = 0 is denoted E_0. As an example, we have shown an arbitrary edge E_0 ∈ E_0 (here, E_0 = (C_0, B_1)) and the transmission on that edge, X(E_0). As another example, we show a "self-edge" in the time-unrolled graph, E_1 ∈ E_1, which in this case is E_1 = (A_1, A_2). Also depicted is the transmission X(E_1) on this self-edge, which is interpreted as the contents of the memory of node A from t = 1 to t = 2. The message M arrives at the input node A_0, but could in general be available at more than one node at t = 0.

We define a random variable model for the nodes' transmissions, and demonstrate how each node computes them. We also explain what we mean by a "message", and formally define the input nodes of the computational system. Definition 1 (Comple...
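The unrolling construction illustrated in Figure 1 can be sketched directly: each edge (u, v) of the complete directed graph G* (self-edges included) becomes an edge (u_t, v_{t+1}) of the time-unrolled graph G, so a self-edge (A_t, A_{t+1}) carries node A's memory across one time step. The function below is a minimal sketch of that construction, not code from the paper.

```python
def unroll(nodes, times):
    """Time-unroll a complete directed graph (with self-edges) over the
    given time indices. Nodes are (name, t) pairs; each edge connects a
    node at time t to a node at time t + 1."""
    v = [(n, t) for t in times for n in nodes]
    e = [((u, t), (w, t + 1))
         for t in times[:-1] for u in nodes for w in nodes]
    return v, e

V, E = unroll(["A", "B", "C"], [0, 1, 2])
# 3 nodes x 3 time indices = 9 unrolled nodes;
# 3 x 3 = 9 edges per step, over 2 steps = 18 edges.
assert (("C", 0), ("B", 1)) in E   # the edge E_0 from Figure 1
assert (("A", 1), ("A", 2)) in E   # the self-edge E_1 (node A's memory)
```

Transmissions X(E) would then be random variables attached to these unrolled edges, which is the object the information-path algorithm operates on.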
Granger causality is an established statistical measure of the "causal influence" that one stochastic process X has on another process Y. Along with its more recent generalization, Directed Information, Granger causality has been used extensively in neuroscience, and in complex interconnected systems in general, to infer statistical causal influences. More recently, many works compare Granger causality metrics along forward and reverse links (from X to Y and from Y to X), and interpret the direction of greater causal influence as the "direction of information flow". In this paper, we question whether the direction yielded by comparing Granger causality or Directed Information along forward and reverse links is always the same as the direction of information flow. We explore this question using two simple theoretical experiments in which the true direction of information flow (the "ground truth") is known by design. The experiments are based on a communication system with a feedback channel, and employ a strategy inspired by the work of Schalkwijk and Kailath. We show that in these experiments, the direction of information flow can be opposite to the direction of greater Granger causal influence or Directed Information. We also provide information-theoretic intuition for why such counterexamples are not surprising, and why Granger causality-based information-flow inferences will only become more tenuous in larger networks. We conclude that one must not infer the direction of information flow by comparing Granger causality (or Directed Information) along forward and reverse links.
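For readers unfamiliar with the quantity being compared, the sketch below estimates one-lag bivariate Granger causality as the log ratio of residual variances between a restricted autoregression (past of the effect only) and a full one (past of both processes). It is a toy illustration, not the paper's Schalkwijk-Kailath construction: in this toy system X drives Y, so GC(X→Y) exceeds GC(Y→X) - the paper's point is precisely that this comparison need not reveal the direction of information flow in general.

```python
import numpy as np

def granger(cause, effect):
    """One-lag Granger causality from `cause` to `effect`: log of the
    ratio of residual variances (restricted vs. full regression).
    Larger values mean `cause` helps predict `effect`."""
    y, y1, x1 = effect[1:], effect[:-1], cause[:-1]

    def resid_var(design):
        coef, *_ = np.linalg.lstsq(design, y, rcond=None)
        return np.var(y - design @ coef)

    restricted = np.column_stack([y1, np.ones_like(y1)])
    full = np.column_stack([y1, x1, np.ones_like(y1)])
    return float(np.log(resid_var(restricted) / resid_var(full)))

# Toy coupled AR(1) system in which X drives Y by construction.
rng = np.random.default_rng(0)
n = 5000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.8 * x[t - 1] + 0.2 * y[t - 1] + rng.normal()
# Here we expect granger(x, y) > granger(y, x); the paper constructs
# systems where this ordering is opposite to the true information flow.
```

The restricted model can never fit better than the full one, so the estimate is nonnegative; the interesting question, which the paper answers in the negative, is whether the larger of the two directions identifies where information actually flows.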