Proceedings of the 2011 SIAM International Conference on Data Mining
DOI: 10.1137/1.9781611972818.15
Clustered low rank approximation of graphs in information science applications

Abstract: In this paper we present a fast and accurate procedure called clustered low-rank matrix approximation for massive graphs. The procedure involves a fast clustering of the graph and then approximates each cluster separately using existing methods, e.g. the singular value decomposition or stochastic algorithms. The cluster-wise approximations are then extended to approximate the entire graph. This approach has several benefits: (1) important community structure of the graph is preserved due to the clustering; (2) …
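The procedure sketched in the abstract — cluster the graph, compute a truncated SVD per cluster, then extend the cluster-wise bases to the whole matrix — can be illustrated as follows. This is a minimal NumPy sketch, not the authors' implementation: the clustering step is assumed given via a `labels` array, and the function name and interface are illustrative.

```python
import numpy as np

def clustered_low_rank(A, labels, rank):
    """Sketch of clustered low-rank approximation (illustrative interface).

    A      : (n, n) adjacency matrix
    labels : length-n array of cluster assignments (clustering assumed done)
    rank   : target rank per cluster
    """
    n = A.shape[0]
    V = np.zeros((n, 0))
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        # Truncated SVD of the diagonal block for this cluster
        U, s, _ = np.linalg.svd(A[np.ix_(idx, idx)])
        k = min(rank, U.shape[1])
        # Embed the cluster basis into a block-diagonal global basis
        Vc = np.zeros((n, k))
        Vc[idx, :] = U[:, :k]
        V = np.hstack([V, Vc])
    # Extend the cluster-wise approximations to the entire graph:
    # A is approximated as V (V^T A V) V^T, so off-diagonal blocks
    # are also captured in the cluster bases.
    S = V.T @ A @ V
    return V @ S @ V.T
```

Because the global basis is block diagonal, the approximation stores one small basis per cluster plus the dense core `S`, which is how the method preserves per-community structure.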

Cited by 43 publications (47 citation statements)
References 31 publications
“…This algorithm may be viewed as post-processing the relative-error random sampling algorithm of the previous subsection to remove redundant columns; and it has been applied successfully to a range of data analysis problems. See, e.g., [26,27,28] and [155,156,157], as well as [158] for a discussion of numerical issues associated with this algorithm.…”
Section: A Two-Stage Hybrid Algorithm for This Problem
Confidence: 99%
“…In this section, we describe the clustered low-rank approximation method proposed in [25], and introduce the problem of link prediction in social network analysis. Throughout the paper, we use capital letters to represent matrices, lower-case bold letters to represent vectors, and lower-case italics to represent scalars.…”
Section: Preliminaries
Confidence: 99%
“…There are a number of benefits of clustered low-rank approximation compared to regular (spectral) low-rank approximation: (1) the clustered low-rank approximation preserves important structural information of a network by extracting a certain amount of information from all of the clusters; (2) it has been shown that the clustered low-rank approximation achieves a lower relative error than the truncated SVD with the same amount of memory [25]; (3) it has also been shown that even a sequential implementation of clustered low-rank approximation [25] is faster than state-of-the-art algorithms for low-rank matrix approximation [20]; (4) the improved accuracy of clustered low-rank approximation contributes to improved performance on end tasks, e.g., prediction of new links in social networks [26] and group recommendation to community members [29].…”
Section: Clustered Low-Rank Approximation
Confidence: 99%
“…Note that with c = 1, CSGE becomes the regular SGE. In a related preliminary evaluation, we explored the benefit of combining clustering with different local (within-cluster) low-rank approximation schemes from the viewpoint of numerical accuracy on a few static graphs [38]. However, higher numerical accuracy in matrix approximations does not necessarily translate to benefits in end applications, such as link prediction.…”
Section: Basic Algorithm
Confidence: 99%