Proceedings of the 28th ACM International Conference on Information and Knowledge Management 2019
DOI: 10.1145/3357384.3357815
Deep Graph Similarity Learning for Brain Data Analysis

Abstract: In many domains where data are represented as graphs, learning a similarity metric among graphs is considered a key problem, which can further facilitate various learning tasks, such as classification, clustering, and similarity search. Recently, there has been an increasing interest in deep graph similarity learning, where the key idea is to learn a deep learning model that maps input graphs to a target space such that the distance in the target space approximates the structural distance in the input space. H…

Cited by 45 publications (30 citation statements)
References 93 publications (85 reference statements)
“…Most GSL methods take pairs or triplets of graphs as input during training, with different objective functions for different graph similarity tasks. The existing evaluation tasks mainly include pair classification (Xu et al. 2017; Ktena et al. 2018; Ma et al. 2019; Li et al. 2019; Fey et al. 2019), graph classification (Tixier et al. 2019; Nikolentzos et al. 2017; Narayanan et al. 2017; Atamna et al. 2019; Wu et al. 2018; Wang et al. 2019a; Liu et al. 2019b; Yanardag and Vishwanathan 2015; Al-Rfou et al. 2019; Du et al. 2019), graph clustering (Wang et al. 2019a), graph distance prediction (Bai et al. 2018, 2019a; Fey et al. 2019), and graph similarity search (Wang et al. 2019c). Classification AUC (i.e., area under the ROC curve) or accuracy is the most popular metric for evaluating the graph-pair classification and graph classification tasks (Ma et al. 2019; Li et al. 2019).…”
Section: Discussion
Citation type: mentioning (confidence: 99%)
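The evaluation protocol described in the statement above can be illustrated with a short sketch: scoring a graph-pair classifier with ROC AUC and thresholded accuracy. This is a minimal, hypothetical example; the labels and similarity scores below are toy values, not results from the paper.

```python
# Minimal sketch of graph-pair classification evaluation (toy values).
from sklearn.metrics import roc_auc_score, accuracy_score

# Ground-truth pair labels (1 = similar pair, 0 = dissimilar pair)
# and the similarity scores a learned model might output for each pair.
y_true = [1, 0, 1, 1, 0, 0]
y_score = [0.92, 0.31, 0.78, 0.55, 0.40, 0.12]

auc = roc_auc_score(y_true, y_score)                            # ranking quality
acc = accuracy_score(y_true, [int(s >= 0.5) for s in y_score])  # scores thresholded at 0.5
print(f"AUC = {auc:.3f}, accuracy = {acc:.3f}")
```

AUC is threshold-free and thus the more common choice when the positive/negative pairs are imbalanced; accuracy requires fixing a decision threshold (0.5 here, purely for illustration).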
“…There are three main categories of deep graph similarity learning methods (see Fig. 1a): (1) graph embedding based methods, which apply graph embedding techniques to obtain node-level or graph-level representations and then use these representations for similarity learning (Tixier et al. 2019; Nikolentzos et al. 2017; Narayanan et al. 2017; Atamna et al. 2019; Wu et al. 2018; Wang et al. 2019a; Xu et al. 2017; Liu et al. 2019b); (2) graph neural network (GNN) based models, which use GNNs for similarity learning, including GNN-CNNs (Bai et al. 2018, 2019a), Siamese GNNs (Ktena et al. 2018; Ma et al. 2019; Liu et al. 2019a; Wang et al. 2019c; Chaudhuri et al. 2019), and GNN-based graph matching networks (Li et al. 2019; Ling et al. 2019; Bai et al. 2019b; Wang et al. 2019b; Jiang et al. 2019; Guo et al. 2018); and (3) deep graph kernels, which first map graphs into a new feature space where kernel functions are defined for similarity learning on graph pairs, including sub-structure based deep kernels (Yanardag and Vishwanathan 2015) and deep neural network based kernels (Du et al. 2019). Meanwhile, different methods may use different types of features in the learning process.…”
Section: Taxonomy of Models
Citation type: mentioning (confidence: 99%)
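The Siamese-GNN category mentioned in the statement above can be sketched in a few lines. The model below is a hypothetical, minimal example (plain PyTorch, dense adjacency matrices, mean pooling, cosine similarity), not the architecture of the paper or of any cited work.

```python
# Minimal Siamese GNN sketch: a shared-weight graph encoder applied to two
# graphs, with cosine similarity between the pooled graph embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGCNLayer(nn.Module):
    """One graph-convolution step: row-normalized (A + I) @ X @ W with ReLU."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, adj, x):
        a_hat = adj + torch.eye(adj.size(0))          # add self-loops
        a_hat = a_hat / a_hat.sum(dim=1, keepdim=True)  # row-normalize
        return F.relu(self.lin(a_hat @ x))

class SiameseGNN(nn.Module):
    """Two graphs pass through the same encoder; similarity is the cosine of
    their graph-level embeddings (mean-pooled node embeddings)."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.conv1 = SimpleGCNLayer(in_dim, hid_dim)
        self.conv2 = SimpleGCNLayer(hid_dim, hid_dim)

    def encode(self, adj, x):
        h = self.conv2(adj, self.conv1(adj, x))
        return h.mean(dim=0)                          # graph-level embedding

    def forward(self, adj1, x1, adj2, x2):
        z1, z2 = self.encode(adj1, x1), self.encode(adj2, x2)
        return F.cosine_similarity(z1, z2, dim=0)

# Toy usage: two random 5-node graphs with 8-dimensional node features.
adj1 = (torch.rand(5, 5) > 0.5).float()
adj2 = (torch.rand(5, 5) > 0.5).float()
model = SiameseGNN(in_dim=8, hid_dim=16)
score = model(adj1, torch.randn(5, 8), adj2, torch.randn(5, 8))
print(score.item())  # similarity score in [-1, 1]
```

In practice such a model would be trained with a pairwise or contrastive loss on labeled graph pairs; graph matching networks extend this setup by adding cross-graph attention between the two encoders.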