2019
DOI: 10.48550/arxiv.1905.00987
Preprint
Network Representation Learning: Consolidation and Renewed Bearing

Cited by 4 publications (6 citation statements)
References 0 publications
“…Link prediction models are trained using 20 randomized train-test splits. Following prior works [9] for link prediction tasks, we split the dataset into training and test data using the following procedure. We first randomly sample 20%…”
Section: Methods (mentioning)
confidence: 99%
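The split procedure quoted above can be sketched as follows. This is a minimal illustration assuming a NetworkX graph: the 20% test fraction, the 20 randomized splits, and the sampling of non-edges as negatives follow the excerpt, while the function name `link_prediction_split` and all other details are assumptions rather than the cited authors' exact code.

```python
# Hedged sketch of a randomized train-test split for link prediction.
# Assumes an undirected NetworkX graph; not the cited implementation.
import random
import networkx as nx

def link_prediction_split(G, test_frac=0.2, seed=0):
    """Hold out `test_frac` of edges as test positives and sample an
    equal number of non-edges as test negatives."""
    rng = random.Random(seed)
    edges = list(G.edges())
    rng.shuffle(edges)
    n_test = int(test_frac * len(edges))
    test_pos, train_edges = edges[:n_test], edges[n_test:]

    # Sample node pairs with no edge as negative test examples.
    nodes = list(G.nodes())
    test_neg = set()
    while len(test_neg) < n_test:
        u, v = rng.sample(nodes, 2)
        if not G.has_edge(u, v):
            test_neg.add((u, v))

    # Training graph keeps all nodes but only the non-held-out edges.
    G_train = nx.Graph()
    G_train.add_nodes_from(G.nodes())
    G_train.add_edges_from(train_edges)
    return G_train, test_pos, list(test_neg)

# 20 randomized splits, as described in the excerpt (toy graph for illustration).
splits = [link_prediction_split(nx.karate_club_graph(), seed=s) for s in range(20)]
```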
“…Quality: To evaluate embedding quality on the link prediction task, we use the AUROC and F1-Score metrics following the procedures described in [9]. We train a logistic regression model as a classifier for positive/negative edges using an equal number of positive and negative edges randomly selected from the training set.…”
Section: Metrics (mentioning)
confidence: 99%
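A hedged sketch of the evaluation this excerpt describes: a logistic regression classifier trained on balanced positive/negative edge sets and scored with AUROC and F1. The Hadamard-product edge features and the helper names (`edge_features`, `evaluate_link_prediction`) are illustrative assumptions, not details confirmed by the excerpt or by [9].

```python
# Hedged sketch: score edge embeddings for link prediction with AUROC and F1.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, f1_score

def edge_features(emb, pairs):
    # Element-wise (Hadamard) product of the two endpoint embeddings
    # is one common way to turn node embeddings into edge features.
    return np.array([emb[u] * emb[v] for u, v in pairs])

def evaluate_link_prediction(emb, train_pos, train_neg, test_pos, test_neg):
    # train_pos / train_neg are assumed to be balanced, as in the excerpt.
    X_train = np.vstack([edge_features(emb, train_pos), edge_features(emb, train_neg)])
    y_train = np.concatenate([np.ones(len(train_pos)), np.zeros(len(train_neg))])
    X_test = np.vstack([edge_features(emb, test_pos), edge_features(emb, test_neg)])
    y_test = np.concatenate([np.ones(len(test_pos)), np.zeros(len(test_neg))])

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    scores = clf.predict_proba(X_test)[:, 1]
    preds = clf.predict(X_test)
    return roc_auc_score(y_test, scores), f1_score(y_test, preds)
```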
“…The Adam Optimizer is used for model training. Evaluation Metrics: We evaluate the quality of the embeddings through multi-label node classification (Perozzi, Al-Rfou, and Skiena 2014;Grover and Leskovec 2016) and link prediction (Gurukar, Vijayan et al 2019). Specifically, for node classification, we run a 10-fold cross validation using the embeddings as features and report the average Micro-F1 and average Macro-F1.…”
Section: Experiments and Analysis (mentioning)
confidence: 99%
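The node-classification protocol in this excerpt (10-fold cross validation on the embeddings as features, reporting average Micro-F1 and Macro-F1) could look roughly like the sketch below. The one-vs-rest logistic regression classifier and the function name `evaluate_node_classification` are assumptions for illustration, not the cited setup.

```python
# Hedged sketch of multi-label node classification with 10-fold CV.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def evaluate_node_classification(emb, labels, n_splits=10, seed=0):
    """emb: (n_nodes, dim) embedding matrix; labels: (n_nodes, n_classes) binary label matrix."""
    micro, macro = [], []
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in kf.split(emb):
        clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
        clf.fit(emb[train_idx], labels[train_idx])
        pred = clf.predict(emb[test_idx])
        micro.append(f1_score(labels[test_idx], pred, average="micro"))
        macro.append(f1_score(labels[test_idx], pred, average="macro"))
    # Average scores over the 10 folds, as in the excerpt.
    return np.mean(micro), np.mean(macro)
```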
“…As an effort to generalize convolutional neural networks to graphs and manifolds, graph neural networks were proposed to analyze graph-structured data. They have achieved state-of-the-art performance in node classification (Kipf & Welling, 2016), knowledge graph completion (Schlichtkrull et al, 2018), link prediction (Dettmers et al, 2018; Gurukar et al, 2019), combinatorial optimization (Li et al, 2018b; Khalil et al, 2017), property prediction (Duvenaud et al, 2015; Xie & Grossman, 2018) and physics simulation (Sanchez-Gonzalez et al, 2020).…”
Section: Related Work (mentioning)
confidence: 99%