2015
DOI: 10.1007/978-3-319-26784-5_15
Strong Localization in Personalized PageRank Vectors

Cited by 13 publications (12 citation statements)
References 20 publications
“…Before concluding, we discuss implications of our work regarding the topic of eigenvector localization in complex networks, which is an important topic in network science [42,43] for the study of centrality [44][45][46], spatial analysis [47], and core-periphery structure [48,49]. In particular, there is growing interest in extending these ideas to time-varying [50] and multilayer networks [51].…”
Section: Discussion
confidence: 97%
“…When the graph is strongly connected, π(i) is non-zero for all nodes. Nevertheless, we can obtain a good approximation by truncating small elements to zero, since most of the probability mass in the personalized PageRank vectors π(i) is localized on a small number of nodes [5,23,36]. Thus, we can approximate π(i) with a sparse vector and, in turn, approximate Π_ppr with a sparse matrix.…”
Section: Personalized PageRank and Localization
confidence: 99%
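The truncation idea in the excerpt above can be sketched as follows. This is an illustration, not code from the cited papers: it computes a PPR vector by plain power iteration and then zeroes out entries below a threshold; the function names, graph, and parameter values are all hypothetical.

```python
import numpy as np

def personalized_pagerank(A, seed, alpha=0.85, iters=100):
    """PPR vector with teleportation back to `seed`.

    A is a symmetric adjacency matrix with no isolated nodes.
    """
    n = A.shape[0]
    P = A / A.sum(axis=0)          # column-stochastic transition matrix
    e = np.zeros(n)
    e[seed] = 1.0
    pi = e.copy()
    for _ in range(iters):
        pi = alpha * (P @ pi) + (1 - alpha) * e
    return pi

def truncate(pi, eps):
    """Zero out entries below eps and renormalize: a sparse approximation."""
    sparse = np.where(pi >= eps, pi, 0.0)
    return sparse / sparse.sum()

# Path graph 0-1-2-3-4-5 seeded at node 0: probability mass concentrates
# near the seed, so entries for distant nodes fall below the threshold.
n = 6
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
pi = personalized_pagerank(A, seed=0)
pi_sparse = truncate(pi, eps=0.05)
```

On this toy path graph the entry for the node farthest from the seed is dropped, so `pi_sparse` is strictly sparser than `pi` while still summing to one.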
“…Out of the plethora of algorithms for PageRank computation, the family of Gauss-Southwell methods [34,3,10,22,23,35,45] and other versions of coordinate descent (see, e.g., [15]) clearly stand out due to their rapid convergence, often much faster than power iteration. Such methods were originally proposed in the 1940s for solving linear systems by greedily eliminating the absolute value of the residual [41,42].…”
Section: Related Literature
confidence: 99%
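A Gauss-Southwell-style iteration of the kind described above can be sketched as a "push" procedure: keep a solution vector and a residual, and repeatedly eliminate the largest residual entry. This is a hedged illustration in the spirit of the excerpt, not the exact algorithm of any cited reference; it assumes every node has at least one out-neighbor, and the names and tolerances are hypothetical.

```python
def gauss_southwell_ppr(adj, seed, alpha=0.85, eps=1e-4):
    """Approximate the PPR vector for `seed` on `adj` (dict: node -> neighbors).

    Each step picks the largest residual entry (the Gauss-Southwell rule),
    moves a (1 - alpha) fraction of it into the solution x, and spreads an
    alpha fraction evenly over that node's neighbors. Iteration stops once
    every residual entry is below eps, so x is a sparse approximation.
    """
    x = {}
    r = {seed: 1.0}
    while r:
        u, ru = max(r.items(), key=lambda kv: kv[1])  # greedy coordinate choice
        if ru < eps:
            break
        del r[u]
        x[u] = x.get(u, 0.0) + (1 - alpha) * ru
        share = alpha * ru / len(adj[u])
        for v in adj[u]:
            r[v] = r.get(v, 0.0) + share
    return x

# Path graph 0-1-2 seeded at node 0: the returned dict carries almost all
# of the unit probability mass, up to the leftover residual.
adj = {0: [1], 1: [0, 2], 2: [1]}
x = gauss_southwell_ppr(adj, seed=0)
```

Each push removes residual mass `ru` and reinjects only `alpha * ru`, so the total residual shrinks monotonically and the loop terminates; this is the mechanism behind the rapid convergence the excerpt mentions.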
“…In this case the random walk typically stays close to the restart node, which makes this model useful for local graph clustering [3,43], similarity measures [4], and semi-supervised learning [6]. Several algorithms [3,43,35,47] take advantage of the localization of PPR to compute a sparse approximation of it. Moreover, the recently developed FAST-PPR [31] achieves even faster convergence by combining the Gauss-Southwell-type method from [2] with random walks.…”
Section: Related Literature
confidence: 99%
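The local-clustering use mentioned above typically turns a localized PPR vector into a cluster via a sweep cut: order nodes by degree-normalized PPR score and take the prefix set with the smallest conductance. The sketch below is illustrative only, with a hypothetical graph and hypothetical PPR scores, and is not the exact procedure of the cited algorithms.

```python
def sweep_cut(adj, ppr):
    """Return the prefix set (by degree-normalized PPR score) minimizing
    conductance cut(S) / min(vol(S), vol(V \\ S)).

    adj is a symmetric dict: node -> list of neighbors (undirected graph);
    ppr maps the nodes with non-zero PPR mass to their scores.
    """
    deg = {u: len(vs) for u, vs in adj.items()}
    total_vol = sum(deg.values())
    order = sorted(ppr, key=lambda u: ppr[u] / deg[u], reverse=True)
    S, vol, cut = set(), 0, 0
    best_set, best_phi = None, float("inf")
    for u in order[:-1]:               # skip the trivial full prefix
        S.add(u)
        vol += deg[u]
        # Edges into S stop being cut (-1); edges leaving S start (+1).
        cut += sum(-1 if v in S else 1 for v in adj[u])
        phi = cut / min(vol, total_vol - vol)
        if phi < best_phi:
            best_set, best_phi = set(S), phi
    return best_set, best_phi

# Two triangles joined by the bridge edge 2-3; hypothetical PPR scores
# seeded near node 0 recover the left triangle as the best cluster.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
ppr = {0: 0.30, 1: 0.25, 2: 0.25, 3: 0.10, 4: 0.05, 5: 0.05}
cluster, phi = sweep_cut(adj, ppr)
```

Because a localized PPR vector is supported on few nodes, the sweep only touches that support, which is what makes this pipeline a *local* clustering method.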