2012
DOI: 10.1007/978-3-642-31594-7_71

A Matrix Hyperbolic Cosine Algorithm and Applications

Abstract: In this paper, we generalize Spencer's hyperbolic cosine algorithm to the matrix-valued setting. We apply the proposed algorithm to several problems by analyzing its computational efficiency under two special cases of matrices: one in which the matrices have a group structure and the other in which they have rank one. As an application of the former case, we present a deterministic algorithm that, given the multiplication table of a finite group of size n, constructs an expanding Cayley graph of logarithmic …
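The abstract's core tool is a matrix-valued analogue of Spencer's hyperbolic cosine potential. The sketch below is a minimal illustration of that idea, not the paper's exact algorithm: it greedily assigns signs to symmetric matrices so that the potential tr(cosh(gamma * S)) of the running signed sum S stays small, which in turn bounds the spectral norm of S. The function name greedy_matrix_balance, the parameter gamma, and the use of scipy's coshm are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.linalg import coshm  # matrix hyperbolic cosine


def greedy_matrix_balance(mats, gamma=0.5):
    """Greedy sign selection via a matrix hyperbolic cosine potential.

    Illustrative sketch (not the paper's exact algorithm): given symmetric
    matrices A_1, ..., A_m, pick signs eps_i in {-1, +1} one at a time so
    that Phi(S) = tr(cosh(gamma * S)) of the running signed sum S stays as
    small as possible.  Since tr(cosh(gamma * S)) >= cosh(gamma * ||S||_2),
    a small potential certifies a small spectral norm for S.
    """
    n = mats[0].shape[0]
    S = np.zeros((n, n))
    signs = []
    for A in mats:
        # Try both signs and keep the one with the smaller potential.
        phi_plus = np.trace(coshm(gamma * (S + A)))
        phi_minus = np.trace(coshm(gamma * (S - A)))
        eps = 1 if phi_plus <= phi_minus else -1
        signs.append(eps)
        S = S + eps * A
    return signs, S


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Random symmetric matrices normalized to spectral norm 1.
    mats = []
    for _ in range(50):
        B = rng.standard_normal((8, 8))
        A = (B + B.T) / 2
        mats.append(A / np.linalg.norm(A, 2))
    signs, S = greedy_matrix_balance(mats)
    print("spectral norm of signed sum:", np.linalg.norm(S, 2))
```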

Cited by 36 publications (33 citation statements); references 43 publications (78 reference statements).

Citation statements (ordered by relevance):

“…Similar results exist in [1]. We should also mention the work in [39], which corresponds to a derandomization of the randomized sampling algorithm in [13].…”
Section: Related Work (supporting; confidence: 66%)
“…Their algorithm constructs a (1 ± ε)-spectral sparsifier of size O(n · poly(log n, ε⁻¹)) in nearly linear time. This result has seen several improvements in recent years [SS11, bHS16, Zou12, ZLO15]. The state of the art in the sequential model is an algorithm by Lee and Sun [LS15] that computes a (1 ± ε)-spectral sparsifier of size O(nε⁻²) in nearly linear time.…”
Section: Dynamic Spectral Sparsifier (mentioning; confidence: 99%)
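For context on the statement above: a (1 ± ε)-spectral sparsifier L̃ of a Laplacian L must satisfy (1 − ε) xᵀLx ≤ xᵀL̃x ≤ (1 + ε) xᵀLx for all x. The helper below is a small numerical check of that condition, written here only as an illustration (it does not come from the cited works); it assumes both matrices are symmetric positive semidefinite with the same null space, and the name is_spectral_sparsifier is hypothetical.

```python
import numpy as np


def is_spectral_sparsifier(L, L_sparse, eps, tol=1e-10):
    """Numerically check the (1 +/- eps)-spectral-sparsifier condition.

    Illustrative helper (not from the cited works).  Assumes L and L_sparse
    are symmetric PSD with the same null space (e.g. Laplacians of connected
    graphs on the same vertex set).  The condition
        (1 - eps) x^T L x <= x^T L_sparse x <= (1 + eps) x^T L x  for all x
    is equivalent to all eigenvalues of L^{+/2} L_sparse L^{+/2}, restricted
    to the range of L, lying in [1 - eps, 1 + eps].
    """
    vals, vecs = np.linalg.eigh(L)
    keep = vals > tol * vals.max()
    # Pseudo-inverse square root of L, acting on range(L).
    L_pinv_sqrt = vecs[:, keep] @ np.diag(vals[keep] ** -0.5) @ vecs[:, keep].T
    M = L_pinv_sqrt @ L_sparse @ L_pinv_sqrt
    evals = np.linalg.eigvalsh(M)
    # Eigenvalues coming from the common null space are ~0; ignore them.
    evals = evals[evals > tol]
    return bool(np.all(evals >= 1 - eps) and np.all(evals <= 1 + eps))
```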
“…Recently, Zouzias [26] made progress in improving the running time of the spectral sparsification result of [1]; can we get a similar improvement for the 2-set algorithms presented here? Or perhaps, can we trade off the running time with randomization in those algorithms?…”
Section: Frobenius Norm Approximation. Note That a Lower Bound For Th… (mentioning; confidence: 99%)