2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS) 2016
DOI: 10.1109/focs.2016.44
On Fully Dynamic Graph Sparsifiers

Abstract: We initiate the study of fast dynamic algorithms for graph sparsification problems and obtain fully dynamic algorithms, allowing both edge insertions and edge deletions, that take polylogarithmic time after each update in the graph. Our three main results are as follows. First, we give a fully dynamic algorithm for maintaining a (1 ± ε)-spectral sparsifier with amortized update time poly(log n, ε⁻¹). Second, we give a fully dynamic algorithm for maintaining a (1 ± ε)-cut sparsifier with worst-case update time po…
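The cut sparsifiers in the abstract are sparse reweighted subgraphs that approximate every cut of the original graph to within a (1 ± ε) factor. The following is a minimal static sketch of the classical sampling idea behind such objects, not the paper's dynamic algorithm; the names `sample_sparsifier` and `cut_weight` and the K6 example are illustrative choices of mine, and uniform sampling concentrates well only for graphs with large minimum cut (general graphs need non-uniform, e.g. connectivity- or resistance-based, sampling probabilities).

```python
import random

def sample_sparsifier(edges, p, seed=0):
    """Keep each edge independently with probability p, giving kept
    edges weight 1/p. Every cut then has the correct weight in
    expectation; concentration requires p tuned to the graph."""
    rng = random.Random(seed)
    return [(u, v, 1.0 / p) for (u, v) in edges if rng.random() < p]

def cut_weight(edges, S):
    """Total weight of edges with exactly one endpoint in S."""
    return sum(w for (u, v, w) in edges if (u in S) != (v in S))

# Demo on the complete graph K6: the cut (S, V \ S) with |S| = 3
# crosses 3 * 3 = 9 unit edges, so its true weight is 9.
edges = [(u, v) for u in range(6) for v in range(u + 1, 6)]
sparse = sample_sparsifier(edges, p=0.5, seed=1)
S = {0, 1, 2}
# Each kept edge has weight 1/p = 2, so the sparsified cut weight
# matches the true weight 9 in expectation over the sampling.
```

The dynamic algorithms in the paper maintain such a sparsifier under edge insertions and deletions in polylogarithmic time per update, rather than resampling from scratch.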


Cited by 29 publications (81 citation statements) · References 56 publications
“…This result is similar to the ones from [36], and improves the dynamic sparsification algorithms in [35] by a factor of log² n. Furthermore, its running time is within an O(log log n) factor of numerically oriented routines based on random projections and solving linear systems [11]. As a result, we're optimistic about the practical potential of this sparsification approach.…”
supporting
confidence: 69%
“…This property is crucial to some uses of sparsifiers in combinatorial algorithms, such as their recent incorporation in data structures [35]. By accounting for the structure of the output of ProbabilisticSpanner given in Section 4.1, we have a similar property.…”
mentioning
confidence: 79%
“…In step 4, instead of computing the exact partial Cholesky factorization of L, we use the algorithm ApxPartialCholesky in Lemma 3.2 to obtain an approximate partial Cholesky factorization of L. However, if we pass the whole L to ApxPartialCholesky, it may change the edges in E^(1), to which we need to perform θ-deletions when deactivating them. Thus, instead, we first delete all edges in E^(1) and pass the resulting L to ApxPartialCholesky, and then add those edges back to the approximate Schur complement S returned by it.…”
Section: The Reason That In…
mentioning
confidence: 99%
“…Step 5 we can compute y_C (S \θ e)^† y_C for each e ∈ E^(1) by recursion is that S is a Laplacian matrix whose edges are supported on C, and hence to compute y_C (S \θ e)^† y_C for all e ∈ E^(1) is just a smaller-sized version of Problem 4.1 in which L = S and E_Q = E^(1). In the rest of this subsection we give an algorithm that solves Problem 4.1 approximately.…”
Section: The Reason That In…
mentioning
confidence: 99%