Intracranially recorded interictal high-frequency oscillations (HFOs) have been proposed as a promising spatial biomarker of the epileptogenic zone. However, visual verification of HFOs is time-consuming and exhibits poor inter-rater reliability. Furthermore, no method is currently available to distinguish HFOs generated from the epileptogenic zone (epileptogenic HFOs) from those generated from other areas (non-epileptogenic HFOs). To address these issues, we constructed a deep learning (DL)-based algorithm using chronic intracranial EEG data recorded via subdural grids from 19 children with medication-resistant neocortical epilepsy to: 1) replicate human expert annotation of artifacts and of HFOs with or without spikes, and 2) discover epileptogenic HFOs by designing a novel weakly supervised model (HFOs from the resected brain regions are initially labeled as epileptogenic, and those from the preserved brain regions as non-epileptogenic). The “purification power” of DL is then used to automatically relabel the HFOs to distill epileptogenic HFOs. Using 12,958 annotated HFO events from the 19 patients, the model achieved 96.3% accuracy on artifact detection (F1 score = 96.8%) and 86.5% accuracy on classifying HFOs with or without spikes (F1 score = 80.8%) under patient-wise cross-validation. Based on the algorithm trained on 84,602 HFO events from the nine patients who achieved seizure freedom after resection, the majority of the DL-discovered epileptogenic HFOs were found to be HFOs with spikes (78.6%, p < 0.001).
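The weakly supervised label-then-relabel loop described above can be sketched in plain Python. Everything here is a hypothetical stand-in, not the study's implementation: the event fields, the confidence thresholds, and `model_prob` (a placeholder for the trained network's probability that an event is epileptogenic).

```python
def weak_labels(events):
    """Initial weak supervision: an HFO from later-resected tissue is
    provisionally epileptogenic (1); one from preserved tissue is not (0)."""
    return [1 if e["in_resected_region"] else 0 for e in events]

def relabel(events, labels, model_prob, hi=0.95, lo=0.05):
    """Distillation step: after training on the weak labels, flip any label
    that the model contradicts with high confidence."""
    new_labels = []
    for event, label in zip(events, labels):
        p = model_prob(event)
        new_labels.append(1 if p >= hi else 0 if p <= lo else label)
    return new_labels

# Toy example: the second event lies in resected tissue, but the model is
# confident it is non-epileptogenic, so its weak label gets flipped.
events = [{"in_resected_region": True, "score": 0.99},
          {"in_resected_region": True, "score": 0.01},
          {"in_resected_region": False, "score": 0.50}]
labels = weak_labels(events)                          # [1, 1, 0]
print(relabel(events, labels, lambda e: e["score"]))  # [1, 0, 0]
```

The middle band between `lo` and `hi` leaves uncertain events with their original resection-based label, so only confident model disagreements drive the distillation.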
While the resection ratio of detected high-frequency oscillations (number of resected events/number of detected events) did not correlate significantly with post-operative seizure freedom (area under the ROC curve = 0.76, p = 0.06), the resection ratio of epileptogenic high-frequency oscillations positively correlated with post-operative seizure freedom (area under the ROC curve = 0.87, p = 0.01). We discovered that, compared to non-epileptogenic high-frequency oscillations, epileptogenic high-frequency oscillations had higher signal intensity in the ripple (80-250 Hz) and fast ripple (250-500 Hz) bands at the event onset and in a lower frequency band throughout the event time window, forming an inverted T shape in the time-frequency map. We then designed perturbations on the inputs of the trained model for non-epileptogenic high-frequency oscillations to probe the model’s decision-making logic. The model’s confidence toward the epileptogenic class significantly increased with the artificial introduction of the inverted T-shaped signal template (mean probability increase: 0.285, p < 0.001) and with the artificial insertion of spike-like signals into the time domain (mean probability increase: 0.452, p < 0.001). With this deep learning-based framework, we reliably replicated high-frequency oscillation classification tasks performed by human experts. Using a reverse-engineering technique, we distinguished epileptogenic high-frequency oscillations from others and identified salient features that aligned with current knowledge.
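As a minimal sketch of the outcome statistic above: the resection ratio is a per-patient fraction, and the area under the ROC curve for predicting seizure freedom from that ratio can be computed via the Mann-Whitney pairwise-win formulation. The patient counts below are invented for illustration; they are not the study's data.

```python
def resection_ratio(n_resected, n_detected):
    """Fraction of a patient's detected HFO events that lay in resected tissue."""
    return n_resected / n_detected

def auc(pos, neg):
    """Mann-Whitney formulation of the ROC AUC: the probability that a
    seizure-free patient's ratio exceeds a non-seizure-free patient's
    ratio, counting ties as half a win."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented ratios: seizure-free patients tend to have higher ratios.
free_ratios = [resection_ratio(9, 10), resection_ratio(8, 10)]      # 0.9, 0.8
not_free_ratios = [resection_ratio(4, 10), resection_ratio(8, 10)]  # 0.4, 0.8
print(auc(free_ratios, not_free_ratios))  # 0.875: 3 wins + 1 tie over 4 pairs
```

An AUC of 1.0 would mean the ratio separates the two outcome groups perfectly; 0.5 would mean it carries no outcome information.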
Objective: Intracranially recorded interictal high-frequency oscillations (HFOs) have been proposed as a promising spatial biomarker of the epileptogenic zone. However, HFOs can also be recorded in healthy brain regions, which complicates their interpretation. The present study aimed to characterize salient features of physiological HFOs using deep learning (DL). Methods: We studied children with neocortical epilepsy who underwent intracranial strip/grid evaluation. Time-series EEG data were transformed into DL training inputs. The eloquent cortex (EC) was defined by functional cortical mapping and used as a DL label. Morphological characteristics of HFOs obtained from the EC (ecHFOs) were distilled and interpreted through a novel weakly supervised DL model. Results: A total of 63,379 interictal intracranially recorded HFOs from 18 children were analyzed. Compared to non-ecHFOs, ecHFOs had lower amplitude throughout the 80-500 Hz frequency band around the HFO onset and also lower signal amplitude in the low-frequency band throughout a one-second time window, resembling a bell-shaped template in the time-frequency map. A minority of ecHFOs were HFOs with spikes (22.9%). Such morphological characteristics were confirmed to influence DL model predictions via perturbation analyses. Using the resection ratio (removed HFOs/detected HFOs) of non-ecHFOs, the prediction of postoperative seizure outcomes improved compared to using uncorrected HFOs (area under the ROC curve of 0.82, increased from 0.76). Interpretation: We characterized salient features of physiological HFOs using a DL algorithm. Our results suggest that this DL-based HFO classification, once trained, might help separate physiological from pathological HFOs and efficiently guide surgical resection using HFOs.
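The outcome correction in the Results above, computing the resection ratio only over HFOs that the model does not flag as physiological, can be sketched as follows. The event dictionaries and field names are hypothetical placeholders for the study's per-event classifier output.

```python
def corrected_resection_ratio(events):
    """Resection ratio over presumed-pathological HFOs only: events the
    DL model flags as physiological (eloquent-cortex-like) are dropped
    before the removed/detected fraction is computed."""
    kept = [e for e in events if not e["predicted_physiological"]]
    if not kept:
        return None  # no presumed-pathological HFOs detected for this patient
    return sum(e["resected"] for e in kept) / len(kept)

# Toy patient: 2 of 4 detected HFOs were resected (uncorrected ratio 0.5),
# but both spared events are flagged physiological, so the corrected ratio is 1.0.
events = [
    {"resected": True,  "predicted_physiological": False},
    {"resected": True,  "predicted_physiological": False},
    {"resected": False, "predicted_physiological": True},
    {"resected": False, "predicted_physiological": True},
]
print(corrected_resection_ratio(events))  # 1.0
```

The design intent is that sparing physiological HFOs should not penalize the ratio, which is why excluding them can sharpen the correlation with seizure outcome.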
Extracting meaning from a dynamic and variable flow of incoming information is a major goal of both natural and artificial intelligence. Computer vision (CV) guided by deep learning (DL) has made significant strides in recognizing a specific identity despite highly variable attributes. This is the same challenge faced by the nervous system and partially addressed by concept cells—neurons exhibiting selective firing in response to specific persons/places, described in the human medial temporal lobe (MTL). Yet, access to neurons representing a particular concept is limited due to these neurons’ sparse coding. It is conceivable, however, that the information required for such decoding is present in relatively small neuronal populations. To evaluate how well neuronal populations encode identity information in natural settings, we recorded neuronal activity from multiple brain regions of nine neurosurgical epilepsy patients implanted with depth electrodes, while the subjects watched an episode of the TV series “24”. First, we devised a minimally supervised CV algorithm (with performance comparable to manually labeled data) to detect the most prevalent characters (above 1% overall appearance) in each frame. Next, we implemented DL models that used the time-varying population neural data as inputs and decoded the visual presence of the four main characters throughout the episode. This methodology allowed us to compare “computer vision” with “neuronal vision”—footprints associated with each character present in the activity of a subset of neurons—and identify the brain regions that contributed to this decoding process. We then tested the DL models during a recognition memory task following movie viewing, in which subjects were asked to recognize clip segments from the presented episode.
DL model activations were not only modulated by the presence of the corresponding characters but also by participants’ subjective memory of whether they had seen the clip segment, and by the associative strengths of the characters in the narrative plot. The described approach can offer novel ways to probe the representation of concepts in time-evolving dynamic behavioral tasks. Further, the results suggest that the information required to robustly decode concepts is present in the population activity of only tens of neurons even in brain regions beyond MTL.
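A minimal stand-in for the decoding setup above: binned spike counts from a small neuronal population as features, per-frame character presence as the label, fit here with a tiny logistic-regression decoder. The data and hyperparameters are illustrative only; the study's actual models were deep networks over time-varying population activity.

```python
import math

def train_logistic(X, y, lr=0.1, epochs=300):
    """Fit w, b so that sigmoid(w.x + b) approximates P(character on screen)
    from a vector of binned spike counts x, by gradient descent on log-loss."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            err = p - yi  # gradient of the log-loss with respect to the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """Decoder output: probability that the character is present in this frame."""
    return 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))

# Toy data: two neurons, the first of which fires more when the character appears.
X = [[5, 1], [6, 0], [4, 2], [1, 1], [0, 2], [1, 0]]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logistic(X, y)
print(predict(w, b, [5, 1]) > 0.5, predict(w, b, [0, 1]) < 0.5)  # True True
```

Even this linear toy illustrates the population-coding point: the identity signal can be read out from the joint activity of a handful of units rather than from any single concept cell.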
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.