2020
DOI: 10.1609/aaai.v34i04.5993

Temporal Network Embedding with High-Order Nonlinear Information

Abstract: Temporal network embedding, which aims to learn low-dimensional representations of nodes in temporal networks that capture and preserve the network structure and evolution pattern, has attracted much attention from the scientific community. However, existing methods suffer from two main disadvantages: 1) they cannot preserve node temporal proximity, which captures important properties of the network structure; and 2) they cannot represent the nonlinear structure of temporal networks. In this paper, we…
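For orientation, here is a minimal sketch of the general recipe the abstract points at: an auto-encoder-based node embedding whose nonlinear encoder captures high-order structure, with the reconstruction loss re-weighted by a temporal-proximity term. This is an illustration under stated assumptions, not the paper's actual HNIP model; the exp(-gap) decay, layer sizes, and all names below are mine.

```python
# A minimal sketch only, NOT the paper's HNIP implementation: it illustrates
# the two ingredients the abstract names, a nonlinear (auto-encoder) embedding
# and a temporal-proximity weighting. The exp(-gap) decay, layer sizes, and
# all identifiers here are assumptions for illustration.
import torch
import torch.nn as nn

class TemporalAutoEncoder(nn.Module):
    def __init__(self, n_nodes: int, dim: int = 32):
        super().__init__()
        # Stacked nonlinear layers capture high-order (multi-hop) structure.
        self.encoder = nn.Sequential(
            nn.Linear(n_nodes, 128), nn.ReLU(), nn.Linear(128, dim))
        self.decoder = nn.Sequential(
            nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, n_nodes))

    def forward(self, adj_rows):
        z = self.encoder(adj_rows)   # low-dimensional node embeddings
        return z, self.decoder(z)    # reconstructed neighborhood vectors

def temporal_weights(adj, last_active):
    """Up-weight edges between recently co-active nodes (assumed decay form)."""
    gap = (last_active.unsqueeze(0) - last_active.unsqueeze(1)).abs()
    return adj * torch.exp(-gap)     # recent interactions count more

# Toy usage: 5 nodes with a random adjacency matrix and activity times.
adj = (torch.rand(5, 5) > 0.5).float()
w = temporal_weights(adj, last_active=torch.rand(5))
model = TemporalAutoEncoder(n_nodes=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    z, recon = model(adj)
    loss = (w * (recon - adj) ** 2).sum()  # temporally weighted reconstruction
    opt.zero_grad(); loss.backward(); opt.step()
```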


Cited by 17 publications (5 citation statements) · References 14 publications
“…A temporal network embedding algorithm that exploits time-constrained temporal random walks as an extension of the node2vec [4] algorithm via the skip-gram model. • HNIP [12]. A temporal network embedding algorithm that employs high-order non-linear information as an extension of auto-encoder-based network embedding algorithms.…”
Section: Baselines (mentioning, confidence: 99%)
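The first baseline above (CTDNE-style) hinges on walks whose edge timestamps never decrease. The sketch below is a minimal illustration under assumptions of mine: the edge-list layout, function names, and uniform sampling are illustrative, not CTDNE's actual design, and some variants require strictly increasing times. The resulting walks would then feed a skip-gram model, as in node2vec.

```python
# Hedged sketch of a time-constrained (time-respecting) temporal random walk.
import random
from collections import defaultdict

def temporal_walk(edges, start, length):
    """edges: list of (u, v, t); returns a walk whose edge times never decrease."""
    # Index outgoing edges per node, sorted by timestamp.
    out = defaultdict(list)
    for u, v, t in edges:
        out[u].append((t, v))
    for u in out:
        out[u].sort()

    walk, node, t_min = [start], start, float("-inf")
    for _ in range(length - 1):
        # Only edges at or after the previous step's time are admissible.
        choices = [(t, v) for t, v in out[node] if t >= t_min]
        if not choices:
            break                    # walk dies out: no time-respecting edge
        t_min, node = random.choice(choices)
        walk.append(node)
    return walk

edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 0, 0.5), (1, 3, 3.0)]
print(temporal_walk(edges, start=0, length=4))   # e.g. [0, 1, 3]
```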
“…(4) to encode sufficient temporal information (for RESCAL we use the matrix product instead of the t-product). For CTDNE, HNIP, and HTNE, we adopt the same hyperparameter settings as in [12]. For TNE, we take the embeddings of the last timestamp for evaluation.…”
Section: Experiments Setup (mentioning, confidence: 99%)
“…where $t_j$ and $t_i$ are the times when user $j$ and user $i$ received message $m$, respectively. The design of the formula is inspired by the first-order temporal proximity [15]. Unlike the original formula, we take the logarithm of the time interval.…”
Section: Model Specification (mentioning, confidence: 99%)
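The snippet names the modification but not the full expression. As one illustrative reading only (the exponential-decay form and the +1 offset are my assumptions, not the citing paper's formula), the pairwise weight might become:

```latex
% Illustrative reconstruction only; the citing paper's exact formula is not
% quoted. A first-order temporal proximity typically decays with the time
% gap; here the raw gap |t_j - t_i| is replaced by its logarithm, with an
% assumed +1 offset so the weight stays finite when t_j = t_i.
\[
  w_{ij} \;=\; \exp\!\Bigl(-\log\bigl(\lvert t_j - t_i \rvert + 1\bigr)\Bigr)
  \;=\; \frac{1}{\lvert t_j - t_i \rvert + 1}
\]
```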
“…However, the optimal predictor trained on the training distribution may not generalize well to the test distribution when there is a distribution shift. In the dynamic-graph literature, researchers are devoted to capturing laws of network dynamics that are stable across systems [42, 86, 103, 113, 138]. Following them, we assume the conditional distribution is the same, $p_{tr}(Y_t \mid G_{1:t}) = p_{te}(Y_t \mid G_{1:t})$, and only consider the covariate shift problem where $p_{tr}(G_{1:t}) \neq p_{te}(G_{1:t})$.…”
Section: Spatio-temporal Distribution Shift (mentioning, confidence: 99%)
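Spelling out the assumption in the quote: the joint distribution over graph histories and labels factorizes into a label-given-history term and a history marginal; the quoted assumption fixes the first factor across train and test and lets only the second shift.

```latex
% Covariate-shift decomposition implied by the quoted assumption:
\[
  p(Y_t,\, G_{1:t}) \;=\;
  \underbrace{p(Y_t \mid G_{1:t})}_{\text{invariant: } p_{tr} \,=\, p_{te}}
  \;\cdot\;
  \underbrace{p(G_{1:t})}_{\text{shifts: } p_{tr} \,\neq\, p_{te}}
\]
```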