Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing 2018
DOI: 10.1145/3188745.3188764
Round compression for parallel matching algorithms


Cited by 55 publications (46 citation statements). References 35 publications (86 reference statements).
“…To work on truly massive inputs, we ideally want our local space to be sublinear in the input size. Some earlier works (e.g., [4,15,19]) study the regime where the local space s is roughly the number of nodes of the graph (i.e., s = Θ(n)); here, though, we consider the more restrictive low-space regime. That is, we give fully scalable algorithms which use only s = Θ(n^ε) space, for any constant ε > 0.…”
Section: The MPC Model (mentioning, confidence: 99%)
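The two memory regimes contrasted in this excerpt can be made concrete with a toy calculation. The helper function, the sample sizes n and m, and the choice ε = 0.5 below are illustrative assumptions, not values from the paper:

```python
import math

def machines_needed(input_size, local_space):
    """Minimum number of machines whose combined memory covers the input."""
    return math.ceil(input_size / local_space)

n = 10**9        # example: a graph with a billion nodes (assumed)
m = 10 * n       # example edge count (assumed)
eps = 0.5        # the constant in s = Theta(n^eps)

near_linear_space = n        # s = Theta(n): one machine can hold all nodes
low_space = int(n ** eps)    # s = Theta(n^eps): truly sublinear per machine

print(machines_needed(m, near_linear_space))  # few machines, huge memory each
print(machines_needed(m, low_space))          # many machines, small memory each
```

The point of the low-space regime is the trade-off visible here: per-machine memory shrinks polynomially, so the machine count grows correspondingly, and an algorithm must work for every constant ε > 0 to be called fully scalable.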
“…The key idea here is to perform deterministic round compression on Luby's algorithm. Round compression is a technique used in some randomized results in MPC and CONGESTED CLIQUE (e.g., [15,19,20], though only the former uses the term), which works by gathering enough information onto machines to simulate multiple steps of a LOCAL or CONGEST algorithm at once.…”
Section: Our Approach (mentioning, confidence: 99%)
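The compression idea quoted above, gathering enough of a node's neighborhood to replay several rounds of a LOCAL algorithm without further communication, can be sketched in a few lines. This is a minimal illustration only: the adjacency-dict encoding, the toy max-propagation rule, and all function names are assumptions for the sketch, not the paper's matching algorithm.

```python
def k_hop_ball(adj, v, k):
    """Nodes within distance k of v, via breadth-first expansion."""
    seen, frontier = {v}, {v}
    for _ in range(k):
        frontier = {u for w in frontier for u in adj[w]} - seen
        seen |= frontier
    return seen

def local_rule(u, state, adj):
    """Toy LOCAL rule: one round of max-propagation over u's neighborhood."""
    return max([state[u]] + [state[w] for w in adj[u]])

def compressed_step(adj, state, k):
    """One 'compressed' round simulating k rounds of local_rule:
    each node gathers its k-hop ball, then replays k rounds locally."""
    new_state = {}
    for v in adj:
        ball = k_hop_ball(adj, v, k)
        # Restrict the graph and the state to the gathered ball.
        ball_adj = {u: [w for w in adj[u] if w in ball] for u in ball}
        s = {u: state[u] for u in ball}
        for _ in range(k):
            s = {u: local_rule(u, s, ball_adj) for u in ball}
        new_state[v] = s[v]  # only v's own final state is guaranteed correct
    return new_state
```

Only v's own final state is guaranteed correct: nodes on the boundary of the ball are missing outside neighbors, but errors propagate inward at most one hop per round, so after k rounds they cannot reach v. This is exactly why one compressed step faithfully reproduces k uncompressed rounds, at the cost of the memory needed to store each k-hop ball.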
“…Massively Parallel Algorithms. There has been a huge body of work on developing efficient, low-round-complexity distributed graph algorithms in the Massively Parallel Computation model [3,5,6,9,15,17,18,22,30,36,46,49,60], among many others. In this paper, our focus is on the recently proposed Adaptive Massively Parallel Computation model [19], which was also recently studied from a lower-bound perspective by Charikar et al. [28].…”
Section: Related Work (mentioning, confidence: 99%)
“…The MPC model has been extensively studied in theory in recent years [3,5,6,9,15,17,18,22,30,36,46,49,60], and its theoretical capabilities and limitations are relatively well-understood. In the context of graph algorithms, a significant limitation of the model is, roughly speaking, the fact that initially each node only knows its immediate neighbors, and exploring a larger neighborhood requires multiple rounds.…”
Section: Introduction (mentioning, confidence: 99%)
“…Many computational problems, such as graph problems [2,7,8,9,10,11,13,14,15,20,21,27,30,32,41,44,40,49,51,59,63], clustering [16,18,36,43,65] and submodular function optimization [33,37,48,56], have been studied in this model, with an emphasis on developing algorithms that minimize the number of communication rounds. For example, [46] recently showed that massively parallel computation can simulate dynamic programming algorithms admitting two properties, i.e.…
Section: Introduction (mentioning, confidence: 99%)