Spatial orientation is essential to interacting with a physical environment, and understanding it better could contribute to a better understanding of a variety of diseases and disorders characterized by deficits in spatial orientation. Many previous studies have focused on the relationship between spatial orientation and individual brain regions, though in recent years studies have begun to examine spatial orientation from a network perspective. This study analyzes dynamic functional network connectivity (dFNC) values extracted from over 800 resting-state fMRI recordings of healthy young adults (ages 22-37 years) and applies unsupervised machine learning methods to identify neural brain states that occur across all subjects. We estimated each subject's occupancy rate (OCR), which is proportional to the amount of time they spent in each state, and investigated the links between OCR and spatial orientation and between state-specific FNC values and spatial orientation, controlling for age and sex. Our findings showed that the amount of time subjects spent at rest in a state characterized by increased connectivity within and between visual, auditory, and sensorimotor networks and within the default mode network corresponded to their performance on tests of spatial orientation. We also found that increased sensorimotor network connectivity in two of the identified states was negatively correlated with spatial orientation performance, further highlighting the relationship between the sensorimotor network and spatial orientation. This study provides insight into how the temporal properties of functional brain connectivity within and between key brain networks may influence spatial orientation.
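To make the dFNC pipeline concrete, the following is a minimal Python sketch (using NumPy and scikit-learn) of sliding-window connectivity estimation, k-means clustering of windows into states, and per-subject occupancy rates. The window length, stride, number of states, and all data here are illustrative assumptions, not the study's actual parameters.

# Minimal sketch of sliding-window dFNC, k-means state clustering, and OCR.
# Window length, stride, and number of states are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def sliding_window_dfnc(timecourses, win_len=40, stride=1):
    """timecourses: (T, C) array of C network time courses over T TRs.
    Returns an (n_windows, C*(C-1)//2) array of vectorized correlations."""
    T, C = timecourses.shape
    iu = np.triu_indices(C, k=1)            # upper triangle, excluding diagonal
    windows = []
    for start in range(0, T - win_len + 1, stride):
        seg = timecourses[start:start + win_len]
        corr = np.corrcoef(seg, rowvar=False)   # (C, C) Pearson correlation
        windows.append(corr[iu])
    return np.asarray(windows)

def occupancy_rates(labels, n_states):
    """Fraction of a subject's windows assigned to each state."""
    return np.bincount(labels, minlength=n_states) / len(labels)

# Hypothetical data: 5 subjects, 200 TRs, 7 network time courses each.
rng = np.random.default_rng(0)
subjects = [rng.standard_normal((200, 7)) for _ in range(5)]

all_windows = [sliding_window_dfnc(tc) for tc in subjects]
stacked = np.vstack(all_windows)

n_states = 3
km = KMeans(n_clusters=n_states, n_init=10, random_state=0).fit(stacked)

# Per-subject OCR: proportion of that subject's windows in each state.
offsets = np.cumsum([0] + [len(w) for w in all_windows])
ocr = np.array([occupancy_rates(km.labels_[offsets[i]:offsets[i + 1]], n_states)
                for i in range(len(subjects))])
print(ocr)   # rows sum to 1: one OCR vector per subject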
In recent years, more biomedical studies have begun to use multimodal data to improve model performance. As such, there is a need for improved multimodal explainability methods. Many studies involving multimodal explainability have used ablation approaches. Ablation requires the modification of input data, which may create out-of-distribution samples and may not always offer a correct explanation. We propose using an alternative gradient-based feature attribution approach, called layer-wise relevance propagation (LRP), to help explain multimodal models. To demonstrate the feasibility of the approach, we selected automated sleep stage classification as our use case and trained a 1-D convolutional neural network (CNN) with electroencephalogram (EEG), electrooculogram (EOG), and electromyogram (EMG) data. We applied LRP to explain the relative importance of each modality to the classification of different sleep stages. Our results showed that across all samples, EEG was most important, followed by EOG and then EMG. For individual sleep stages, EEG and EOG had higher relevance for classifying the awake stage and non-rapid eye movement stage 1 (NREM1), EOG was most important for classifying rapid eye movement (REM), and EEG was most relevant for classifying NREM2-NREM3. Additionally, LRP assigned consistent levels of importance to each modality for correctly classified samples across folds and inconsistent levels of importance for incorrectly classified samples. Our results demonstrate the additional insight that gradient-based approaches can provide relative to ablation methods and highlight their feasibility for explaining multimodal electrophysiology classifiers.
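To illustrate how LRP can yield per-modality importance, below is a minimal epsilon-rule LRP sketch in PyTorch for a toy multimodal 1-D CNN whose three input channels stand in for EEG, EOG, and EMG. The architecture, epoch length, and epsilon value are assumptions for demonstration, not the trained model from the study.

# Minimal epsilon-rule LRP sketch for a toy multimodal 1-D CNN.
# The architecture and channel layout (EEG/EOG/EMG as input channels)
# are illustrative assumptions, not the study's trained model.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(
    nn.Conv1d(3, 8, kernel_size=7, padding=3),   # 3 channels: EEG, EOG, EMG
    nn.ReLU(),
    nn.Conv1d(8, 16, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(16, 5),                            # 5 sleep stages
).eval()

def lrp_epsilon(model, x, target, eps=1e-6):
    """Propagate relevance for the `target` logit back to the input
    using the generic epsilon rule applied layer by layer."""
    with torch.no_grad():                        # record each layer's input
        activations = [x]
        for layer in model:
            activations.append(layer(activations[-1]))
    relevance = torch.zeros_like(activations[-1])
    relevance[:, target] = activations[-1][:, target]   # start at target logit
    for layer, a in zip(reversed(list(model)), reversed(activations[:-1])):
        a = a.detach().requires_grad_(True)
        z = layer(a) + eps                       # forward through this layer
        s = (relevance / z).detach()
        (z * s).sum().backward()                 # a.grad = d/da sum(s * z)
        relevance = a * a.grad                   # epsilon-rule relevance
    return relevance.detach()

x = torch.randn(1, 3, 3000)                      # one 30 s epoch at 100 Hz
target = model(x).argmax(dim=1).item()
R = lrp_epsilon(model, x, target)

# Modality importance: share of absolute relevance per input channel.
share = R.abs().sum(dim=2).squeeze(0)
share = share / share.sum()
for name, v in zip(["EEG", "EOG", "EMG"], share):
    print(f"{name}: {v.item():.3f}")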
The automated feature extraction capabilities of deep learning classifiers have promoted their broader application to EEG analysis. In contrast to earlier machine learning studies that used extracted features, to which traditional explainability approaches readily apply, explainability for classifiers trained on raw data is particularly challenging. As such, studies have begun to present methods that provide insight into the spectral features learned by deep learning classifiers trained on raw EEG. These approaches have two key shortcomings. (1) They involve perturbation, which can create out-of-distribution samples that cause inaccurate explanations. (2) They are global, not local; local explainability approaches can be used to examine how demographic and clinical variables affect the patterns learned by a classifier. In our study, we present a novel local spectral explainability approach and apply it to a convolutional neural network trained for automated sleep stage classification. We apply layer-wise relevance propagation (LRP) to identify the relative importance of the features in the raw EEG and subsequently examine the frequency domain of the explanations to determine the importance of each canonical frequency band both locally and globally. We then perform a statistical analysis to determine whether age and sex affected the patterns learned by the classifier for each frequency band and sleep stage. Results showed that δ, β, and γ were the overall most important frequency bands. In addition, age and sex significantly affected the patterns learned by the classifier for most sleep stages and frequency bands. Our study presents a novel spectral explainability approach that could substantially increase the level of insight into classifiers trained on raw EEG.
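The spectral summary step can be illustrated as follows: given the LRP relevance assigned to a raw EEG epoch, compute its power spectrum and aggregate the share of relevance power within each canonical frequency band. The sampling rate and band edges below are common conventions assumed for illustration, not necessarily the study's exact values.

# Sketch: summarize an LRP explanation of a raw EEG epoch by frequency band.
# Sampling rate and band edges are standard conventions assumed here.
import numpy as np

FS = 100                                   # assumed sampling rate (Hz)
BANDS = {                                  # canonical EEG bands (Hz)
    "delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
    "beta": (12, 25), "gamma": (25, 45),
}

def band_relevance(relevance, fs=FS, bands=BANDS):
    """relevance: 1-D LRP relevance signal over one EEG epoch.
    Returns the share of relevance spectral power in each band."""
    spectrum = np.abs(np.fft.rfft(relevance)) ** 2
    freqs = np.fft.rfftfreq(len(relevance), d=1.0 / fs)
    total = spectrum.sum()
    return {name: spectrum[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in bands.items()}

# Hypothetical relevance signal for one 30 s epoch.
rng = np.random.default_rng(0)
print(band_relevance(rng.standard_normal(30 * FS)))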
With the growing use of multimodal data for deep learning classification in healthcare research, more studies have begun to present explainability methods for insight into multimodal classifiers. Among these studies, few have utilized local explainability methods, which could provide (1) insight into the classification of each sample and (2) an opportunity to better understand the effects of latent variables within datasets (e.g., the medication status of subjects in electrophysiology data). To the best of our knowledge, this opportunity has not yet been explored within multimodal classification. We present a novel local ablation approach that shows the importance of each modality to the correct classification of each class and explore the effects of latent variables upon the classifier. As a use case, we train a convolutional neural network for automated sleep staging with electroencephalogram (EEG), electrooculogram (EOG), and electromyogram (EMG) data. We find that EEG is the most important modality across most stages, though EOG is particularly important for non-rapid eye movement stage 1. Further, we identify significant relationships between the local explanations and subject age, sex, and medication status, which suggest that the classifier learned specific features associated with these variables across multiple modalities when correctly classifying samples. Our novel explainability approach has implications for many fields involving multimodal classification. Moreover, our examination of the degree to which demographic and clinical variables may affect classifiers could provide direction for future studies in automated biomarker discovery.
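A minimal sketch of the local ablation idea follows: zero out one modality at a time for a single sample and record the drop in the predicted probability of that sample's class. The toy CNN and channel layout are illustrative assumptions, not the study's trained model.

# Sketch of per-sample (local) modality ablation: zero one modality and
# measure the change in the predicted probability of the sample's class.
# The toy CNN and channel layout (EEG, EOG, EMG) are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(
    nn.Conv1d(3, 8, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(8, 5),
).eval()

MODALITIES = {"EEG": 0, "EOG": 1, "EMG": 2}

@torch.no_grad()
def local_ablation(model, x):
    """x: (1, 3, T) sample. Returns the predicted class and the
    per-modality drop in its probability when that modality is zeroed."""
    probs = model(x).softmax(dim=1)
    cls = probs.argmax(dim=1)
    baseline = probs[0, cls].item()
    drops = {}
    for name, ch in MODALITIES.items():
        x_abl = x.clone()
        x_abl[:, ch, :] = 0.0                  # "zero out" ablation
        drops[name] = baseline - model(x_abl).softmax(dim=1)[0, cls].item()
    return cls.item(), drops

x = torch.randn(1, 3, 3000)                    # one hypothetical 30 s epoch
print(local_ablation(model, x))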
Many automated sleep staging studies have used deep learning approaches, and a growing number have used multimodal data to improve their classification performance. However, few studies using multimodal data have provided model explainability. Some have used traditional ablation approaches that "zero out" a modality. However, the samples that result from this ablation are unlikely to be found in real electroencephalography (EEG) data, which could adversely affect the resulting importance estimates. Here, we train a convolutional neural network for sleep stage classification with EEG, electrooculogram (EOG), and electromyogram (EMG) data and propose an ablation approach that replaces each modality with values that approximate the line-related noise commonly found in electrophysiology data. The relative importance that we identify for each modality is consistent with sleep staging guidelines, with EEG being important for most sleep stages and EOG being important for rapid eye movement (REM) and non-REM stages. EMG showed low relative importance across classes. A comparison of our approach with a "zero out" ablation approach indicates that, while the importance results are largely consistent, our method accentuates the importance of modalities to the model for the classification of some stages, such as REM (p < 0.05). These results suggest that a careful, domain-specific selection of an ablation approach may provide a clearer indicator of modality importance. Further, this study provides guidance for future research on using explainability methods with multimodal electrophysiology data.
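The noise-based replacement can be sketched as swapping one modality's signal for a low-amplitude sinusoid at the power-line frequency rather than zeros. The line frequency, amplitude, and sampling rate below are illustrative assumptions.

# Sketch of the line-noise ablation idea: replace one modality with a
# sinusoid approximating power-line noise instead of zeroing it out.
# Line frequency, amplitude, and sampling rate are illustrative assumptions.
import torch

FS = 100          # assumed sampling rate (Hz)
LINE_HZ = 60      # assumed power-line frequency (50 Hz in many regions)

def line_noise_ablate(x, channel, fs=FS, line_hz=LINE_HZ, amplitude=0.1):
    """x: (batch, channels, T). Returns a copy with `channel` replaced by
    a line-frequency sinusoid rather than zeros."""
    t = torch.arange(x.shape[-1], dtype=x.dtype) / fs
    noise = amplitude * torch.sin(2 * torch.pi * line_hz * t)
    x_abl = x.clone()
    x_abl[:, channel, :] = noise              # broadcast over the batch
    return x_abl

x = torch.randn(2, 3, 30 * FS)                # two hypothetical 30 s epochs
x_abl = line_noise_ablate(x, channel=2)       # ablate the EMG channel
print(x_abl[:, 2, :5])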
Apolipoprotein E (APOE) polymorphic alleles are genetic factors associated with Alzheimer's disease (AD) risk. Although previous studies have explored the link between AD genetic risk and static functional network connectivity (sFNC), to the best of our knowledge, no previous studies have evaluated the association between dynamic FNC (dFNC) and AD genetic risk. Here, we examined the link between sFNC, dFNC, and AD genetic risk with a reproducible, data-driven approach. We used resting-state fMRI (rs-fMRI), demographic, and APOE data from cognitively normal individuals (N = 894) between 42 and 95 years of age (mean = 70 years) and divided the individuals into low-, moderate-, and high-risk groups. Using Pearson correlation, we calculated sFNC across seven brain networks. We also calculated dFNC with a sliding window and Pearson correlation, and the dFNC windows were partitioned into three distinct states with k-means clustering. Next, we calculated the amount of time each subject spent in each state, termed the occupancy rate (OCR). We compared both sFNC and OCR, estimated from dFNC, across individuals with different levels of genetic risk and found that both sFNC and dFNC are related to AD genetic risk. We found that higher AD risk is associated with reduced within-visual sensory network (VSN) sFNC and that individuals with higher AD risk spend more time in a state with lower within-VSN dFNC. Additionally, we found that AD genetic risk affects whole-brain sFNC and dFNC in women but not in men. In conclusion, we presented novel insights into the links between sFNC, dFNC, and AD genetic risk.
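The static side of this analysis reduces each subject to a single correlation matrix, after which a connectivity value (or an OCR) can be compared across risk groups, for example with a one-way ANOVA. All data, group sizes, and the chosen network pair below are simulated placeholders.

# Sketch: per-subject static FNC via Pearson correlation, then a one-way
# ANOVA comparing one connectivity pair across APOE risk groups.
# All data and group assignments below are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def sfnc(timecourses):
    """timecourses: (T, C) network time courses -> (C, C) Pearson sFNC."""
    return np.corrcoef(timecourses, rowvar=False)

# Hypothetical subjects: 30 per risk group, 200 TRs, 7 networks.
groups = {g: [sfnc(rng.standard_normal((200, 7))) for _ in range(30)]
          for g in ("low", "moderate", "high")}

# Compare one connectivity pair (indices chosen arbitrarily here)
# across the three risk groups.
i, j = 0, 1
samples = [np.array([m[i, j] for m in mats]) for mats in groups.values()]
f_stat, p_val = stats.f_oneway(*samples)
print(f"F = {f_stat:.2f}, p = {p_val:.3f}")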