2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)
DOI: 10.1109/focs.2016.74

A New Framework for Distributed Submodular Maximization

Abstract: A wide variety of problems in machine learning, including exemplar clustering, document summarization, and sensor placement, can be cast as constrained submodular maximization problems. A lot of recent effort has been devoted to developing distributed algorithms for these problems. However, these results suffer from a high number of rounds, suboptimal approximation ratios, or both. We develop a framework for bringing existing algorithms in the sequential setting to the distributed setting, achieving near optimal…
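The sequential building block the abstract alludes to is typically a centralized approximation algorithm such as the classic greedy rule for monotone submodular maximization under a cardinality constraint. Below is a minimal Python sketch of that greedy rule, not code from the paper; the function names and the toy coverage objective are illustrative assumptions.

```python
import random

def greedy_max(f, ground_set, k):
    """Standard greedy for monotone submodular f under |S| <= k:
    repeatedly add the element with the largest marginal gain."""
    selected = set()
    for _ in range(k):
        best_elem, best_gain = None, 0.0
        for e in ground_set - selected:
            gain = f(selected | {e}) - f(selected)
            if gain > best_gain:
                best_elem, best_gain = e, gain
        if best_elem is None:  # no element gives positive gain; stop early
            break
        selected.add(best_elem)
    return selected

if __name__ == "__main__":
    # Toy coverage objective (illustrative): each element covers a random
    # subset of 200 items and f(S) counts the items covered by S.
    random.seed(0)
    covers = {e: frozenset(random.sample(range(200), 20)) for e in range(50)}
    coverage = lambda S: len(set().union(*(covers[e] for e in S))) if S else 0
    solution = greedy_max(coverage, set(covers), k=5)
    print(sorted(solution), coverage(solution))
```

For monotone submodular objectives this greedy rule gives the well-known (1 − 1/e) approximation in the sequential setting; the paper's contribution concerns how to retain such guarantees when the data is distributed across machines.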

Cited by 36 publications (10 citation statements, published 2018–2023). References 25 publications.

Citation statements (ordered by relevance):
“…Motivated by the growing data size, extensive effort has been made to develop distributed algorithms for SMCC using the MapReduce (MR) framework, e.g. Mirzasoleiman et al (2013); Mirrokni and Zadimoghaddam (2015); Barbosa et al (2016); Kazemi et al (2021) (see Table 1). The proposed distributed algorithms of Barbosa et al (2015) and Barbosa et al (2016) provide a way to utilize a given centralized, approximation algorithm ALG in a distributed setting, where ALG is executed on the distributed data over a constant number of MR rounds to yield a constant approximation.…”
Section: Introduction (mentioning)
confidence: 99%
“…Mirzasoleiman et al (2013); Mirrokni and Zadimoghaddam (2015); Barbosa et al (2016); Kazemi et al (2021) (see Table 1). The proposed distributed algorithms of Barbosa et al (2015) and Barbosa et al (2016) provide a way to utilize a given centralized, approximation algorithm ALG in a distributed setting, where ALG is executed on the distributed data over a constant number of MR rounds to yield a constant approximation. In order for their theoretical guarantees to hold, ALG must satisfy a consistency property (formally defined in Property 1).…”
Section: Introduction (mentioning)
confidence: 99%
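The snippets above describe running a centralized algorithm ALG over a constant number of MapReduce rounds. A minimal sketch of that distribute-then-merge pattern follows, reusing the earlier greedy sketch as ALG; it illustrates the general round structure the citing papers describe, not the exact framework of Barbosa et al. (2016), and it does not verify the consistency requirement (their Property 1).

```python
import random

def distributed_maximize(alg, f, ground_set, k, num_machines, seed=0):
    """Distribute-then-merge sketch: randomly partition the data, run a
    centralized algorithm `alg` on each part, then run `alg` once more on
    the pooled machine solutions and return the best candidate."""
    rng = random.Random(seed)

    # Round 1 ("map"): random partition, then alg on each machine's share.
    parts = [set() for _ in range(num_machines)]
    for e in ground_set:
        parts[rng.randrange(num_machines)].add(e)
    machine_solutions = [alg(f, part, k) for part in parts]  # parallel in a real MR job

    # Round 2 ("reduce"): pool the machine solutions and run alg on the pool.
    pooled = set().union(*machine_solutions)
    merged = alg(f, pooled, k)

    # Return the best solution found on any machine or on the pooled data.
    return max(machine_solutions + [merged], key=f)

# Example call, reusing the earlier greedy sketch and coverage objective:
#   distributed_maximize(greedy_max, coverage, set(covers), k=5, num_machines=4)
```

As the citation notes, the theoretical guarantees of such schemes hinge on properties of ALG (the consistency property) and on how the data is partitioned; the sketch only illustrates the round structure.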
“…From a more practical perspective, submodular maximization problems have found uses in social networks [52,60], vision [12,56], machine learning [64,65,66,72,73] and many other areas (the reader is referred, for example, to a comprehensive survey by Bach [5]). Submodular maximization has also been studied in various computational models, such as online and secretary (random arrival) settings [8,15,19,45], streaming [2,21,41,44,59,71], parallel computation [6,7,23,24] and distributed computing [74,83]. Some works also took a game theoretic perspective on submodular optimization [3,32,77].…”
Section: Introduction (mentioning)
confidence: 99%
“…Recent applications of submodular function maximization to large data sets, and technological trends, have motivated new directions of research. These include the study of faster algorithms in the sequential model of computation [2,34,19,35,27], algorithms in the distributed setting [33,29,32,8,9,30], and algorithms in the streaming setting [3,14,18]. Barbosa et al. [9] developed a general technique to obtain a constant-round algorithm in the MapReduce model of computation that gets arbitrarily close to the approximation achievable in the sequential setting.…”
Section: Introduction (mentioning)
confidence: 99%