2021
DOI: 10.1016/j.media.2021.102233

BrainGNN: Interpretable Brain Graph Neural Network for fMRI Analysis

Cited by 311 publications (162 citation statements). References 58 publications.
“…Next, they interpret the importance of each sub-graph identified by performing an extensive comparison between their GNN and existing classifiers. On the other hand, (78,80) proposed a GAT-based architecture to predict the most discriminative ROIs for identifying a disease. They specifically proposed a pooling layer along with a regularization loss term to soften the distribution of the node pooling scores generated by the network.…”
Section: Biomarker Identification
confidence: 99%
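The pooling-plus-regularization idea described in the excerpt above can be sketched in a few lines. The following numpy functions (`topk_pool`, `score_regularizer`) are hypothetical illustrations under assumed conventions, not the exact layers or loss terms from the cited papers: top-k pooling keeps the highest-scoring ROIs and gates their features by the score, and a penalty on the score distribution (here a binary-entropy term, chosen for illustration) regularizes how peaked or soft the pooling scores become.

```python
import numpy as np

def topk_pool(node_feats, scores, k):
    """Keep the k highest-scoring nodes; gate kept features by their scores.

    node_feats: (n, d) array of node features
    scores:     (n,) pooling scores in (0, 1)
    """
    idx = np.argsort(scores)[::-1][:k]          # indices of the top-k scores
    return node_feats[idx] * scores[idx, None], idx

def score_regularizer(scores, eps=1e-9):
    """Mean binary entropy of the pooling scores.

    Illustrative stand-in for the distribution-shaping loss described in the
    cited works; the exact form and sign of that term are defined there.
    """
    s = np.clip(scores, eps, 1 - eps)
    return float(np.mean(-s * np.log(s) - (1 - s) * np.log(1 - s)))
```

During training, the regularizer would be added (with some weight) to the classification loss so the network learns pooling scores with the desired distribution.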
“…In other words, the GCN is trained on the whole graph and tested on subgraphs, so that the importance of subgraphs and nodes can be determined. In both works from Li et al [23], [88], the authors improved their individual graph-level analysis by proposing BrainGNN and a pooling-regularized GNN model to investigate the brain regions related to a neurological disorder from task-fMRI data for ASD vs. HC classification.…”
Section: A Functional Connectivity Analysis
confidence: 99%
“…GNNs are designed to generate embeddings for a node by aggregating features of its neighboring nodes, for node classification, graph classification, or link prediction tasks ( Zhou et al, 2020 ). In recent years, neuroimaging studies have employed task-specific variants of graph convolutional networks (GCNs), a popular GNN model that generalizes the convolutional neural network (CNN) architecture to graph-structured data ( Parisot et al, 2018 ; Zhang et al, 2018 ; Jansson and Sandström, 2020 ; Jiang et al, 2020 ; Li X. et al, 2020 ; Goli, 2021 ; Liu et al, 2021 ; Qu et al, 2021 ; Wang et al, 2021 ; Yao et al, 2021 ). The graph attention network (GAT) is another powerful GNN model which generates node embeddings via a self-attention mechanism, where certain nodes in the neighborhood are given more attention than others, focusing on the most relevant parts of the graph ( Veličković et al, 2017 ).…”
Section: Introduction
confidence: 99%
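The neighbor-aggregation step that the excerpt above attributes to GCNs can be sketched concretely. This minimal numpy function follows the standard symmetric-normalization form of graph convolution (Kipf and Welling style); the function name and shapes are assumptions for illustration, not code from any of the cited studies:

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution step: add self-loops, symmetrically normalize
    the adjacency matrix, aggregate neighbor features, then apply ReLU.

    adj:    (n, n) binary adjacency matrix
    feats:  (n, d_in) node features
    weight: (d_in, d_out) learnable projection
    """
    a_hat = adj + np.eye(adj.shape[0])                    # self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))         # D^{-1/2}
    norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(norm @ feats @ weight, 0.0)         # ReLU activation
```

A GAT layer differs in that the fixed normalized weights `norm[i, j]` are replaced by learned attention coefficients computed from the features of nodes i and j.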