Proceedings of the 2018 World Wide Web Conference on World Wide Web - WWW '18
DOI: 10.1145/3178876.3186107
Fast and Accurate Random Walk with Restart on Dynamic Graphs with Guarantees

Abstract: Given a time-evolving graph, how can we track similarity between nodes in a fast and accurate way, with theoretical guarantees on the convergence and the error? Random Walk with Restart (RWR) is a popular measure to estimate the similarity between nodes and has been exploited in numerous applications. Many real-world graphs are dynamic with frequent insertion/deletion of edges; thus, tracking RWR scores on dynamic graphs in an efficient way has aroused much interest among data mining researchers. Recently, dyn…
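The abstract refers to RWR without spelling out its computation. Below is a minimal sketch of static RWR scoring by power iteration (Python/NumPy); the function name, dense-matrix representation, and default damping factor are illustrative assumptions, not the paper's tracking algorithm for dynamic graphs.

```python
import numpy as np

# Minimal RWR sketch (illustrative, not the paper's dynamic algorithm):
# iterate r = c * A_tilde^T r + (1 - c) * e_s, where e_s restarts the walk
# at the seed node s and the damping factor c is smaller than 1.
def rwr_scores(A: np.ndarray, seed: int, c: float = 0.85,
               iters: int = 100) -> np.ndarray:
    deg = A.sum(axis=1).astype(float)
    deg[deg == 0] = 1.0                      # guard against isolated nodes
    A_tilde = A / deg[:, None]               # row-normalized adjacency matrix
    e = np.zeros(A.shape[0])
    e[seed] = 1.0                            # restart distribution
    r = e.copy()
    for _ in range(iters):
        r = c * (A_tilde.T @ r) + (1.0 - c) * e
    return r                                 # r[v]: similarity of node v to seed
```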

Cited by 28 publications (16 citation statements)
References 27 publications
“…Whereas the Laplacian is based on subtracting the degree matrix D, the degree row-normalized adjacency matrix is obtained by multiplying the inverse of D with A. We normalize the matrix row-wise (Yoon, Jin, and Kang 2018; Lovász 1993; Sonnenberg 2009) and define the degree row-normalized adjacency matrix as…”
Section: Graph Normalization and Topology Bias
confidence: 99%
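To make the definition concrete, here is a minimal sketch of the row-wise normalization $\tilde{A} = D^{-1}A$ described in the excerpt above (NumPy; the zero-degree guard is an assumption, not from the cited works):

```python
import numpy as np

# Degree row-normalization: A_tilde = D^{-1} A, so every row sums to 1.
def row_normalize(A: np.ndarray) -> np.ndarray:
    deg = A.sum(axis=1).astype(float)        # out-degrees (diagonal of D)
    deg[deg == 0] = 1.0                      # assumption: keep isolated rows as zeros
    return A / deg[:, None]                  # multiply by D^{-1} on the left

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 1, 0]], dtype=float)
print(row_normalize(A))                      # each row sums to 1 (row-stochastic)
```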
“…This equation is interpreted as a propagation of scores across a graph: initial scores in the starting vector b are propagated across the graph by multiplying with $\tilde{A}^{\top}$; since the damping factor c is smaller than 1, the propagated scores converge, resulting in PageRank scores. As shown in [28], PageRank computation time is proportional to the L1 length of the starting vector b, since a small L1 length of b leads to faster convergence of the iteration $\sum_{i=0}^{\infty}(c\tilde{A}^{\top})^{i}$ with damping factor c. Here the L1 length of a vector is defined as the sum of the absolute values of its entries, and the L1 length of a matrix as the maximum L1 length of its columns.…”
Section: Preliminaries
confidence: 99%
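The convergence argument can be seen directly in code: truncating the series $p = (1-c)\sum_{i \ge 0}(c\tilde{A}^{\top})^{i}b$ once the remaining L1 mass is negligible ties the iteration count to the L1 length of b. A hedged sketch, where the series form and the stopping tolerance are assumptions inferred from the excerpt:

```python
import numpy as np

# Evaluate p = (1 - c) * sum_{i>=0} (c * A_tilde^T)^i b term by term.
# Each multiplication by c * A_tilde^T shrinks the term's L1 length by a
# factor of c < 1, so fewer iterations are needed when b has small L1 length.
def propagate(A_tilde: np.ndarray, b: np.ndarray, c: float = 0.85,
              tol: float = 1e-9) -> np.ndarray:
    p = np.zeros_like(b, dtype=float)
    term = (1.0 - c) * b                     # i = 0 term of the series
    while np.abs(term).sum() > tol:          # L1 length of the remaining term
        p += term
        term = c * (A_tilde.T @ term)        # next term of the series
    return p
```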
“…From now on, denote $\Delta A = \tilde{B}^{\top} - \tilde{A}^{\top}$, the difference between the transposes of the normalized matrices $\tilde{A}$ and $\tilde{B}$. Lemma 1 (Dynamic PageRank, Theorem 3.2 in [28]). Given updates ∆A in a graph during ∆t, an updated PageRank vector p(t + ∆t) is computed incrementally from a previous PageRank vector p(t) as follows:…”
Section: Preliminaries
confidence: 99%
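The lemma's formula is truncated in the snippet. Under the series form used above, the resolvent identity suggests one plausible reconstruction: form an offset seed $q(t) = c\,\Delta A\,p(t)$ and propagate it over the new graph, $p(t+\Delta t) = p(t) + \sum_{i \ge 0}(c\tilde{B}^{\top})^{i}\,q(t)$. The sketch below implements that reconstruction and should be read as an illustration, not the paper's exact statement:

```python
import numpy as np

# Incremental PageRank sketch (reconstructed, not verbatim from [28]):
#   q = c * (B_tilde^T - A_tilde^T) @ p_old    # offset seed from the update
#   p_new = p_old + sum_{i>=0} (c * B_tilde^T)^i q
def update_pagerank(p_old: np.ndarray, A_tilde_T: np.ndarray,
                    B_tilde_T: np.ndarray, c: float = 0.85,
                    tol: float = 1e-9) -> np.ndarray:
    term = c * ((B_tilde_T - A_tilde_T) @ p_old)   # Delta A applied to p(t)
    p = p_old.astype(float).copy()
    while np.abs(term).sum() > tol:                # remaining L1 mass
        p += term
        term = c * (B_tilde_T @ term)              # propagate on the new graph
    return p
```

Note the design consequence: the cost is governed by the L1 length of the offset seed, i.e. by the size of the update ∆A, rather than by recomputing PageRank from scratch.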
“…They update the precomputed PPR vectors when edges are added or deleted, and they can likewise be categorized into the major approaches used by other works. IRWR [49] updates PPR vectors by solving the linear formulation directly, while LayFwdUpdate [83] and OSP [84] are dynamic variants of iterative methods. TrackingPPR [85] builds on the bookmark-coloring scheme.…”
Section: G Recent Advances
confidence: 99%