2019
DOI: 10.1007/978-3-030-36687-2_74

TemporalNode2vec: Temporal Node Embedding in Temporal Networks

Abstract: The goal of graph embedding is to learn a representation of graph vertices in a latent low-dimensional space in order to encode the structural information that lies in graphs. While real-world networks evolve over time, the majority of research focuses on static networks, ignoring local and global evolution patterns. A simplistic approach consists of learning node embeddings independently for each time step, which can cause unstable and inefficient representations over time. We present a novel dynamic graph e…
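
As a rough illustration of the "simplistic approach" the abstract criticises, the sketch below learns node2vec-style embeddings independently for each snapshot and then applies a naive exponential smoothing to stabilise them across time. This is a minimal hypothetical sketch: the toy snapshot graphs, the walk parameters, the helper names (uniform_random_walks, snapshot_embedding) and the smoothing coefficient alpha are illustrative assumptions, not the joint-learning algorithm TemporalNode2vec actually proposes.

# Hypothetical sketch: independent per-snapshot embeddings (the baseline the
# abstract calls "simplistic") plus a naive smoothing step; not the paper's method.
import random
import networkx as nx
from gensim.models import Word2Vec

def uniform_random_walks(G, num_walks=10, walk_length=20, seed=0):
    # Plain uniform random walks; node2vec reduces to this when p = q = 1.
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for start in G.nodes():
            walk = [start]
            while len(walk) < walk_length:
                nbrs = list(G.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            walks.append([str(n) for n in walk])
    return walks

def snapshot_embedding(G, dim=32):
    # Independent skip-gram embedding for a single snapshot.
    walks = uniform_random_walks(G)
    model = Word2Vec(walks, vector_size=dim, window=5, min_count=0,
                     sg=1, workers=1, seed=0)
    return {n: model.wv[str(n)] for n in G.nodes()}

# Toy temporal network: three snapshots of a slowly changing random graph.
snapshots = [nx.erdos_renyi_graph(50, 0.08, seed=t) for t in range(3)]

# Simplistic approach: embeddings learned independently per time step.
# Nothing ties the latent spaces together, so node positions drift arbitrarily.
independent = [snapshot_embedding(G) for G in snapshots]

# Naive stabilisation (an assumption, not TemporalNode2vec): exponentially
# smooth each node's vector toward its previous-snapshot vector.
alpha = 0.5
smoothed = [independent[0]]
for t in range(1, len(independent)):
    prev, cur = smoothed[-1], independent[t]
    smoothed.append({n: alpha * cur[n] + (1 - alpha) * prev.get(n, cur[n])
                     for n in cur})

With alpha = 1 this reduces to the unstable independent baseline; smaller values trade per-snapshot fidelity for temporal stability, which is the tension the paper addresses by learning the embeddings jointly across time steps.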

Cited by 14 publications (11 citation statements)
References 23 publications
“…For the temporal embedding baseline methods, we employ dynamicTriad (DT) [28] and temporalNode2vec (TN2V) [11].…”
Section: B. Baseline Methods
confidence: 99%
“…In this work, we focus on temporal graph autoencoders, i.e., those which return sequences of embeddings [11], [12], [16], [22], [28]. These techniques improve performance on inference tasks that are sensitive to the temporal dimension.…”
Section: Introduction
confidence: 99%
“…The authors of TemporalNode2vec (Haddad et al., 2020) and Dyn-VGAE (Mahdavi, Khoshraftar & An, 2019) propose to learn the structural information of each snapshot with separate models. Haddad et al. (2020) suggest computing individual sets of random walks for each snapshot in Node2Vec fashion and learning the final node embeddings jointly, while in Mahdavi, Khoshraftar & An (2019) the autoencoders for each snapshot are trained in a consistent way to preserve similarity between consecutive graph updates.…”
Section: Related Work
confidence: 99%
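
A minimal sketch of the "separate walks per snapshot, joint embedding learning" recipe described in the excerpt above, reusing the uniform_random_walks helper and the toy snapshots list from the sketch near the top of this page. Tagging each node token with its snapshot index and training a single skip-gram model over the pooled walk corpus is one plausible reading of that description, assumed here purely for illustration; it is not the exact TemporalNode2vec procedure.

# Assumes uniform_random_walks(...) and snapshots from the earlier sketch above.
from gensim.models import Word2Vec

corpus = []
for t, G in enumerate(snapshots):
    for walk in uniform_random_walks(G, seed=t):
        # "node|t" tokens keep one vector per (node, time) pair, while a single
        # shared skip-gram model ties all snapshots into one latent space.
        corpus.append([f"{node}|{t}" for node in walk])

joint = Word2Vec(corpus, vector_size=32, window=5, min_count=0,
                 sg=1, workers=1, seed=0)
emb = joint.wv["3|1"]  # embedding of node 3 at snapshot index 1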
“…Most of the methods described above concentrate on static embeddings, so they perform poorly in the temporal scenario. Haddad et al. (2019) propose an adaptation of the Node2vec model to the dynamic case. The authors also introduce task-specific temporal embeddings.…”
Section: Temporal Network
confidence: 99%