Proceedings of the 30th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA 2018)
DOI: 10.1145/3210377.3210386

Greedy and Local Ratio Algorithms in the MapReduce Model

Abstract: MapReduce has become the de facto standard model for designing distributed algorithms to process big data on a cluster. There has been considerable research on designing efficient MapReduce algorithms for clustering, graph optimization, and submodular optimization problems. We develop new techniques for designing greedy and local ratio algorithms in this setting. Our randomized local ratio technique gives 2-approximations for weighted vertex cover and weighted matching, and an f-approximation for weighted set…
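
The local ratio technique mentioned in the abstract has a simple sequential core that the paper parallelizes. Below is a minimal sketch of the classic sequential local ratio 2-approximation for weighted vertex cover; it illustrates only the underlying weight-reduction idea, not the paper's randomized constant-round MapReduce variant, and the example graph and weights are hypothetical.

```python
# Minimal sequential sketch of the classic local ratio 2-approximation
# for weighted vertex cover. The paper's contribution is a randomized
# MapReduce variant running in a constant number of rounds; this sketch
# only shows the weight-reduction idea it builds on.

def local_ratio_vertex_cover(edges, weight):
    """edges: iterable of (u, v) pairs; weight: dict vertex -> weight."""
    w = dict(weight)  # residual weights, reduced as edges are processed
    for u, v in edges:
        if w[u] > 0 and w[v] > 0:
            # Subtract the same amount from both endpoints; any vertex
            # driven to zero "pays for" the edges incident to it.
            delta = min(w[u], w[v])
            w[u] -= delta
            w[v] -= delta
    # Vertices whose weight was exhausted form a 2-approximate cover.
    return {v for v, r in w.items() if r == 0}

# Hypothetical example: a path a-b-c with weights 1, 3, 1.
cover = local_ratio_vertex_cover([("a", "b"), ("b", "c")],
                                 {"a": 1, "b": 3, "c": 1})
print(cover)  # e.g. {'a', 'c'}: every edge has an endpoint in the cover
```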

Cited by 20 publications (17 citation statements)
References 40 publications

“…Furthermore, Parter [42] designed an MPC algorithm that uses O(n) memory per machine and finds a (∆ + 1)-coloring in O(log log ∆ · log*(n)) rounds. Our Result 4 improves these results significantly: both the number of colors used and the per-machine memory compared to [26], and the round complexity (with at most a polylog(n)-factor more per-machine memory) compared to [42].…”
Section: Our Contributions
confidence: 90%

“…Starting with Luby's celebrated distributed/parallel algorithm for the maximal independent set problem [36], there have been numerous attempts at adapting these greedy algorithms to different models of computation, including the models considered in this paper (see, e.g., [5,26,28,34,35,40]). Typically these adaptations require multiple passes/rounds of computation, for the fundamental reason that most greedy algorithms are inherently sequential: they access the input graph adaptively, based on decisions made thus far, which, although limited, still requires multiple passes/rounds over the input.…”
Section: Our Contributions
confidence: 99%
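
As background for the excerpt above, here is a minimal single-machine simulation of Luby-style random-priority selection for maximal independent set; the graph representation and example are assumptions, and a real MPC implementation would execute each round in a distributed fashion.

```python
import random

# Minimal sketch of Luby-style maximal independent set: in each round,
# every surviving vertex draws a random priority and joins the MIS if
# its priority beats all surviving neighbors'. Joined vertices and their
# neighbors are then removed -- the repeated rounds are exactly the
# adaptivity the excerpt describes.

def luby_mis(adj):
    """adj: dict vertex -> set of neighbors (undirected graph)."""
    alive = set(adj)
    mis = set()
    while alive:
        prio = {v: random.random() for v in alive}
        winners = {v for v in alive
                   if all(prio[v] < prio[u] for u in adj[v] if u in alive)}
        mis |= winners
        # Remove winners and their neighborhoods before the next round.
        removed = set(winners)
        for v in winners:
            removed |= adj[v] & alive
        alive -= removed
    return mis

# Hypothetical 4-cycle: any MIS consists of two opposite vertices.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(luby_mis(adj))  # e.g. {0, 2} or {1, 3}
```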
“…The model appeared some years after the programming paradigm was popularized by Google [31]. It has been successfully employed in practice for massive-scale algorithms [15,19,20,41,44,48,68,69]. Algorithms in MapReduce use map and reduce functions, executed in sequence.…”
Section: Mapreduce Algorithmmentioning
confidence: 99%
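
The excerpt's one-line description of the model can be made concrete with a minimal single-machine simulation of one MapReduce round; the degree-counting task and all function names below are hypothetical, chosen only to show map, shuffle, and reduce executed in sequence.

```python
from collections import defaultdict

# Minimal single-machine simulation of one MapReduce round: a map
# function emits key-value pairs, the pairs are grouped by key (the
# shuffle), and a reduce function processes each group. Counting vertex
# degrees from an edge list is a hypothetical example task.

def map_edge(edge):
    u, v = edge
    yield (u, 1)
    yield (v, 1)

def reduce_degree(vertex, counts):
    return (vertex, sum(counts))

def mapreduce_round(records, mapper, reducer):
    groups = defaultdict(list)
    for rec in records:                  # map phase
        for key, val in mapper(rec):
            groups[key].append(val)      # shuffle: group values by key
    return [reducer(k, vs) for k, vs in groups.items()]  # reduce phase

edges = [("a", "b"), ("b", "c"), ("a", "c")]
print(mapreduce_round(edges, map_edge, reduce_degree))
# [('a', 2), ('b', 2), ('c', 2)]
```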

“…In [32], MapReduce algorithms for well-known problems are proposed, and the authors show a theoretical approximation ratio of two for the minimum vertex cover problem. On the other hand, our first algorithm may produce worse solutions in special cases than the MapReduce algorithm from the literature.…”
Section: Introduction
confidence: 99%
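
For context on the factor of two mentioned in the excerpt: in the unweighted case, taking both endpoints of any maximal matching yields a 2-approximate vertex cover. The sketch below shows that textbook argument; it is not the MapReduce algorithm of [32], whose details are not given here.

```python
# Minimal sequential sketch of the textbook 2-approximation for
# (unweighted) minimum vertex cover: take both endpoints of every edge
# in a greedily built maximal matching. Matched edges are vertex-
# disjoint, so any cover needs one vertex per matched edge, while this
# algorithm takes two -- hence the factor of two.

def matching_vertex_cover(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            # Edge is unmatched by previous picks: add both endpoints.
            cover.update((u, v))
    return cover

# Hypothetical triangle c-x-y: the optimum cover has two vertices.
print(matching_vertex_cover([("c", "x"), ("c", "y"), ("x", "y")]))
# e.g. {'c', 'x'} -- every edge is covered
```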