Proceedings of the 13th International Conference on Web Search and Data Mining 2020
DOI: 10.1145/3336191.3371843
A Structural Graph Representation Learning Framework

Cited by 35 publications (18 citation statements). References 28 publications.
“…where N is the total number of graphs in the training set, K = N(N − 1)/2 is the total number of pairs from the training set, y_ij is the ground-truth label for the pair of graphs G_i and G_j, where y_ij = 1 for similar pairs and y_ij = −1 for dissimilar pairs, and s_ij is the similarity score estimated by the model. More general forms of higher-order information [e.g., motifs (Ahmed et al. 2015, 2017b)] have been used for learning graph representations (Rossi et al. 2018, 2020a) and would likely benefit the learning.…”
Section: Siamese GNN Models for Graph Similarity Learning
confidence: 99%
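The pairwise training setup in the excerpt above can be sketched in a few lines. This is a minimal illustration, not the cited model: `pair_labels` and `hinge_pair_loss` are hypothetical helper names, and the hinge form of the loss is an assumption — the excerpt only fixes the labels y_ij ∈ {1, −1}, the scores s_ij, and the pair count K = N(N − 1)/2.

```python
from itertools import combinations

def pair_labels(graph_labels):
    """Build all K = N(N-1)/2 training pairs from N graphs:
    y_ij = 1 if the two graphs share a class label (similar),
    y_ij = -1 otherwise (dissimilar)."""
    pairs = []
    for i, j in combinations(range(len(graph_labels)), 2):
        y = 1 if graph_labels[i] == graph_labels[j] else -1
        pairs.append((i, j, y))
    return pairs

def hinge_pair_loss(scores, pairs, margin=1.0):
    """Average hinge loss over all pairs: max(0, margin - y_ij * s_ij).
    scores maps a pair (i, j) to the model's similarity estimate s_ij.
    (The exact loss in the cited work may differ; hinge is one common choice.)"""
    K = len(pairs)
    return sum(max(0.0, margin - y * scores[(i, j)]) for i, j, y in pairs) / K
```

With three graphs labelled [0, 0, 1] this yields K = 3 pairs, one similar and two dissimilar, and the loss is zero whenever every pair is scored on the correct side of the margin.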
“…It treats graph diffusion kernels as probability distributions over networks and calculates embeddings by using characteristic functions of the distributions. HONE [133] analyzes motifs of a weighted graph where an edge's weight is the count of the co-occurrences of the two endpoints in a specific motif. The main limitation of these matrix factorization methods is low computational efficiency resulting from calculating pair-wise node similarity.…”
Section: Role Discovery Models
confidence: 99%
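The motif-based edge weighting attributed to HONE in the excerpt can be illustrated with the simplest nontrivial motif, the triangle: an edge's weight is the number of triangles its two endpoints co-occur in. A minimal sketch (`triangle_edge_weights` is a hypothetical helper; HONE itself supports a broader family of motifs, not only triangles):

```python
def triangle_edge_weights(edges):
    """Weight each undirected edge (u, v) by the number of triangles it
    participates in, i.e. |N(u) ∩ N(v)| -- the count of common neighbors.
    This is the triangle-motif instance of motif co-occurrence weighting."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return {(u, v): len(adj[u] & adj[v]) for u, v in edges}
```

On a triangle {0, 1, 2} with a pendant edge (2, 3), each triangle edge receives weight 1 and the pendant edge weight 0, so the motif-weighted graph separates structurally distinct edges that plain adjacency treats identically.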
“…Role-aware models embed structurally similar nodes close in the latent space, independent of network position [4], [45]. A few approaches [33] employ strict definitions of structural equivalence to embed nodes with identical local structures to the same point in the latent space, while others utilize structural node features (e.g., node degrees, motif count statistics) to extend classical proximity-preserving embedding methods, e.g., feature-based matrix factorization [46] and random walk methods [5]. Notably, a few methods design structural GCNs via motif adjacency matrices [34], [35], [47].…”
Section: Related Work
confidence: 99%
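The structural node features mentioned in the excerpt (node degrees, motif count statistics) can be computed directly from an adjacency matrix. A minimal sketch, assuming a binary symmetric adjacency matrix and using only degree and triangle participation as features (`structural_features` is a hypothetical name; role-aware methods use richer motif statistics in practice):

```python
import numpy as np

def structural_features(A):
    """Per-node structural features: [degree, triangle count].
    A is a binary symmetric adjacency matrix with zero diagonal.
    diag(A^3)/2 counts the triangles each node participates in,
    since diag(A^3) counts closed walks of length 3."""
    deg = A.sum(axis=1)
    tri = np.diag(np.linalg.matrix_power(A, 3)) / 2
    return np.stack([deg, tri], axis=1)
```

Because these features depend only on a node's local structure, two nodes far apart in the network but with the same degree and triangle profile map to the same feature vector, which is exactly the position-independence that role-aware embeddings aim for.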