“…GNNs are designed to generate embeddings for a node by aggregating the features of its neighboring nodes, for node classification, graph classification, or link prediction tasks (Zhou et al., 2020). In recent years, neuroimaging studies have employed task-specific variants of graph convolutional networks (GCNs), a popular GNN model that generalizes the convolutional neural network (CNN) architecture to graph-structured data (Parisot et al., 2018; Zhang et al., 2018; Jansson and Sandström, 2020; Jiang et al., 2020; Li X. et al., 2020; Goli, 2021; Liu et al., 2021; Qu et al., 2021; Wang et al., 2021; Yao et al., 2021). The graph attention network (GAT) is another powerful GNN model that generates node embeddings through a self-attention mechanism, in which certain nodes in a neighborhood receive more attention than others, thereby focusing on the most relevant parts of the graph (Veličković et al., 2017).…”
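As a minimal illustration of the attention-based aggregation described above, the sketch below implements a single-head, GAT-style layer in plain NumPy: each node's new embedding is a softmax-weighted sum of its neighbors' transformed features, with weights computed from a learnable attention vector. The function name `gat_layer` and all inputs (`X`, `A`, `W`, `a`) are illustrative assumptions, not an implementation from any of the cited works.

```python
import numpy as np

def gat_layer(X, A, W, a, leak=0.2):
    """Single-head GAT-style aggregation (illustrative sketch).

    X: (n, f) node features; A: (n, n) adjacency with self-loops;
    W: (f, f') shared weight matrix; a: (2*f',) attention vector.
    Returns the (n, f') updated node embeddings.
    """
    H = X @ W                       # linearly transform node features
    n = H.shape[0]
    # Attention logits e_ij = LeakyReLU(a^T [h_i || h_j]) for every pair
    e = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            z = np.concatenate([H[i], H[j]]) @ a
            e[i, j] = z if z > 0 else leak * z
    # Mask non-neighbors, then softmax over each node's neighborhood
    e = np.where(A > 0, e, -np.inf)
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    return alpha @ H                # attention-weighted neighbor aggregation
```

Because the attention coefficients are normalized per neighborhood, each output embedding is a convex combination of its neighbors' transformed features; a plain GCN layer corresponds to replacing `alpha` with fixed, degree-based weights.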