IEEE INFOCOM 2017 - IEEE Conference on Computer Communications
DOI: 10.1109/infocom.2017.8057171
Coflow scheduling in input-queued switches: Optimal delay scaling and algorithms

Abstract: A coflow is a collection of parallel flows belonging to the same job. It has the all-or-nothing property: a coflow is not complete until the completion of all its constituent flows. In this paper, we focus on optimizing coflow-level delay, i.e., the time to complete all the flows in a coflow, in the context of an N × N input-queued switch. In particular, we develop a throughput-optimal scheduling policy that achieves the best scaling of coflow-level delay as N → ∞. We first derive lower bounds on the …
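As a toy illustration of the all-or-nothing property described in the abstract (not code from the paper), the sketch below treats a coflow's completion time as the maximum of its constituent flows' finish times; the finish-time values are hypothetical.

```python
def coflow_completion_time(flow_finish_times):
    """A coflow finishes only when its slowest constituent flow finishes
    (the all-or-nothing property), so coflow-level delay is the maximum
    over the per-flow finish times."""
    return max(flow_finish_times)

# Hypothetical example: three parallel flows of one coflow finish at
# times 4, 7, and 5, so the coflow-level completion time is 7.
print(coflow_completion_time([4, 7, 5]))  # -> 7
```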

Cited by 18 publications (3 citation statements); references 35 publications (61 reference statements).

“…Because of the practical issues, a scheduling algorithm for stochastic real-time jobs in unreliable workers is crucial in distributed computing networks. The most relevant works to ours are [11,12]. While [11] focused on homogeneous stochastic jobs in the coflow model, [12] extended to a heterogeneous case.…”
Section: Introduction (mentioning)
confidence: 99%
“…The most relevant works to ours are [11,12]. While [11] focused on homogeneous stochastic jobs in the coflow model, [12] extended to a heterogeneous case. The fundamental difference between those relevant works and ours is that we consider stochastic real-time jobs and unreliable workers.…”
Section: Introduction (mentioning)
confidence: 99%
“…Even for values as low as d = 2, these policies significantly outperform randomized splitting (which corresponds to d = 1) [4], [11]. In case of batch arrivals, the communication burden can be further amortized over multiple packets [15]. Power-of-d policies also extend to heterogeneous scenarios and loss systems (rather than single-server queueing settings) [6]-[8].…”
Section: Introduction (mentioning)
confidence: 99%
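The power-of-d idea referenced in the citation statement above can be sketched in a few lines: sample d queues uniformly at random and route the arriving job to the shortest one, with d = 1 reducing to purely randomized splitting. The function name and list-of-lengths representation below are illustrative assumptions, not code from the cited works.

```python
import random

def power_of_d_choice(queue_lengths, d, rng=random):
    """Sample d distinct queues uniformly at random and return the index
    of the shortest sampled queue (ties broken by sample order)."""
    sampled = rng.sample(range(len(queue_lengths)), d)
    return min(sampled, key=lambda i: queue_lengths[i])

# Hypothetical usage: with queue lengths [5, 2, 9, 4] and d = 2, the job
# joins the shorter of the two sampled queues.
queues = [5, 2, 9, 4]
chosen = power_of_d_choice(queues, d=2)
queues[chosen] += 1
```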