2018 IEEE Information Theory Workshop (ITW)
DOI: 10.1109/itw.2018.8613519

Storage, Computation, and Communication: A Fundamental Tradeoff in Distributed Computing

Abstract: Distributed computing has become one of the most important frameworks for handling large computation tasks. In this paper, we investigate a MapReduce-like distributed computing system. Our main contribution is the characterization of the optimal tradeoff between storage space, computation load, and communication load. To this end, we derive an information-theoretic converse and show that time- and memory-sharing between the operating points achieved by the modified coded distributed computing (M-CDC) scheme…

Cited by 33 publications (23 citation statements)
References 24 publications
“…The optimal tradeoff between the computation load in the Map phase and the communication load in the Shuffle phase is derived in [3], which finds that increasing the computation load of the Map phase by a factor r can reduce the communication load of the Shuffle phase by the same factor r. This idea of coded distributed computing has since been extended widely, e.g., [4]-[9]. In particular, [4], [5] propose new coded distributed computing schemes, [6] studies distributed computing with storage constraints at nodes, [7] studies distributed computing under time-varying excess computing resources, and [8], [9] study wireless distributed computing systems.…”
Section: Introduction
confidence: 99%
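The tradeoff quoted above can be sketched numerically. The formula below, L(r) = (1/r)(1 − r/K), is the coded communication load reported in the CDC literature cited as [3]; the function and parameter names here are illustrative, and this is a minimal sketch rather than an implementation of any cited scheme.

```python
# Sketch of the computation-communication tradeoff described above.
# K is the number of nodes; r is the computation load (each file is
# mapped at r nodes). L(r) = (1/r) * (1 - r/K) is the coded shuffle
# load attributed to [3] in the quoted statement.

def communication_load(r: int, K: int) -> float:
    """Normalized Shuffle-phase communication load at computation load r."""
    assert 1 <= r <= K
    return (1.0 / r) * (1.0 - r / K)

K = 10
loads = {r: communication_load(r, K) for r in range(1, K + 1)}
# Raising r cuts the load by (at least) the same factor r:
# L(1) = 0.9, while L(3) = (1/3)(1 - 3/10) ≈ 0.233, a ~3.9x reduction.
```

The ratio L(1)/L(r) = r(1 − 1/K)/(1 − r/K) is at least r, matching the "reduce communication load by the same factor r" observation in the quote.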
“…, φ_Q} with |W_k| = W_k. Note that, unlike [3]-[6], [8]-[12], W_k may vary for different k. Similar to [5], [6], [8]-[12], [14], we assume that W_j ∩ W_k = ∅ for j ≠ k, so that each function is assigned to exactly one node. Thus, we have Σ_{k∈[K]} W_k = Q.…”
Section: Introduction
confidence: 99%
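The function-assignment constraint in the quote (disjoint sets W_k whose sizes sum to Q, with possibly different sizes per node) can be illustrated with a minimal sketch; the helper name and the round-robin-by-contiguous-blocks choice are hypothetical, not from the cited works.

```python
# Minimal illustration of the quoted constraint: Q output functions are
# partitioned into disjoint sets W_1, ..., W_K with sizes summing to Q.
# Sizes may differ across nodes (heterogeneous function assignment).

def assign_functions(Q: int, sizes: list[int]) -> list[set[int]]:
    """Partition function indices 0..Q-1 into disjoint contiguous blocks."""
    assert sum(sizes) == Q, "each function must go to exactly one node"
    parts, start = [], 0
    for s in sizes:
        parts.append(set(range(start, start + s)))
        start += s
    return parts

# Q = 6 functions over K = 3 nodes with unequal |W_k| = 3, 2, 1.
W = assign_functions(6, [3, 2, 1])
```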
“…A converse bound was proposed in [14] to show that the proposed coded distributed computing scheme is optimal in terms of communication load. This coded distributed computing framework was extended to settings such as computing only necessary intermediate values [15], [16], reducing file partitions and the number of output functions [16], [17], and considering random network topologies [18], stragglers [19], storage cost [20], and heterogeneous computing power, function assignment, and storage space [21], [22].…”
Section: Relation To Other Problems
confidence: 99%
“…We then add (39), (40), and (34) to obtain (42). Hence, for each integer p ∈ [K], the bound in (42) becomes a linear function in M. When M = qp, from (42) we have R_u ≥ N(K−p)/((K−1)p). When M = q(p+1), from (42) we have R_u ≥ N(K−p−1)/((K−1)(p+1)).…”
Section: B. Proof of Theorem
confidence: 99%
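The corner points of the quoted linear bound can be checked numerically. The symbols (K nodes, N files, storage M = qp) follow the quoted proof; the function name and the sample values of N and K are assumptions for illustration.

```python
# Corner points of the quoted converse bound: evaluating the linear-in-M
# bound (42) at M = q*p gives R_u >= N(K-p)/((K-1)p), and at M = q(p+1)
# gives R_u >= N(K-p-1)/((K-1)(p+1)) -- i.e., the same formula at p+1.

def corner_bound(N: int, K: int, p: int) -> float:
    """Lower bound on R_u at the corner point M = q*p, per the quoted proof."""
    assert 1 <= p <= K
    return N * (K - p) / ((K - 1) * p)

N, K = 12, 4
bounds = [corner_bound(N, K, p) for p in range(1, K + 1)]
# The bound is non-increasing in p: more storage, less required communication.
assert all(bounds[i] >= bounds[i + 1] for i in range(K - 1))
```

Time- and memory-sharing between consecutive corner points (as the abstract describes) linearly interpolates these values for storage M between qp and q(p+1).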