Proceedings of the Conference of the ACM Special Interest Group on Data Communication 2017
DOI: 10.1145/3098822.3098840
Credit-Scheduled Delay-Bounded Congestion Control for Datacenters

Cited by 180 publications (63 citation statements)
References 27 publications
“…We see that DCTCP takes 6 RTTs (480 µs) to converge to fair share whereas Slytherin converges in 4 RTTs (320 µs). Because Slytherin provides faster convergence, it effectively mitigates utilization and fairness issues in multi-bottleneck scenarios, as reported in prior studies [21].…”
Section: Convergence Time
confidence: 77%
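
The figures in the statement above imply a base RTT of 80 µs (480 µs / 6 RTTs = 320 µs / 4 RTTs). A minimal arithmetic sketch using only the quoted numbers; the helper name and the inferred 80 µs constant are ours:

```python
# Illustrative arithmetic only: RTT counts and times come from the quoted
# statement; the 80 us base RTT is inferred from them (480 us / 6 RTTs).
BASE_RTT_US = 80  # inferred base round-trip time, in microseconds

def convergence_time_us(num_rtts, base_rtt_us=BASE_RTT_US):
    """Time to converge when a scheme needs num_rtts round trips."""
    return num_rtts * base_rtt_us

print(convergence_time_us(6))  # DCTCP:     480 us
print(convergence_time_us(4))  # Slytherin: 320 us
```
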
“…Despite these innovative ideas, because these schemes tackle the general case with arbitrarily changing number of flows which interact in arbitrary ways, the schemes rely on slow, iterative convergence to the appropriate sending rates. As discussed in Section 1, other schemes, including EyeQ [28], NumFabric [40] and ExpressPass [8], also rely on iterative convergence. Such convergence requires many round trips (e.g., 50 RTTs in TIMELY, 31 RTTs in NUMFabric, and 25-30 RTTs in EyeQ), as illustrated in Figure 1 for a sender whose initial sending rate is 100% of the line rate and the target rate is 50%.…”
Section: Challenges
confidence: 99%
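
To illustrate why such feedback-driven schemes need tens of round trips, here is a minimal sketch of a generic iterative rate controller, not TIMELY's, NUMFabric's, or EyeQ's actual update rule: each RTT the sender moves a fixed fraction of the way toward the target rate, so the 100%-to-50% scenario from the quote takes a few dozen RTTs to settle. The 0.1 gain and 2% tolerance are arbitrary illustration parameters.

```python
# A generic iterative rate-control loop (not any specific scheme's rule):
# one update per round trip of feedback, each moving a fraction `gain`
# of the remaining error toward the target sending rate.
def rtts_to_converge(initial=1.0, target=0.5, gain=0.1, tol=0.02):
    rate, rtts = initial, 0
    while abs(rate - target) > tol * target:
        rate += gain * (target - rate)  # one control step per RTT
        rtts += 1
    return rtts

# Sender starting at 100% of line rate with a 50% target, as in the quote:
print(rtts_to_converge())  # 38 RTTs with these illustrative parameters
```

With these parameters the loop takes 38 round trips, the same order of magnitude as the 25-50 RTTs reported for the schemes above.
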
“…(3) NUMFabric [40] achieves more flexible and faster bandwidth allocation than TCP but still employs iterative convergence (e.g., 31 RTTs). And, (4) while ExpressPass [8] and NDP [23] target general congestion via receiver-based congestion control, neither scheme isolates receiver congestion. ExpressPass employs BIC-TCP iterative convergence which takes 20 RTTs for a datacenter network (Section 5.1); ExpressPass shows results only for a simple network.…”
Section: Introduction
confidence: 99%
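
As a rough illustration of the binary-increase (BIC-style) search mentioned above, and only of its core idea rather than ExpressPass's actual credit feedback loop, the sketch below halves the distance to the last-known-good rate each RTT; the 20-RTT figure quoted for ExpressPass reflects the full multi-flow feedback dynamics, not this idealized search. The function name, rates, and the 1% resolution are illustrative assumptions.

```python
# Core idea of binary-increase search only (not ExpressPass's algorithm):
# each RTT the rate probes the midpoint between its current value and the
# last-known-good maximum, so the gap shrinks geometrically.
def binary_increase_rtts(rate, rate_max, resolution=0.01):
    rtts = 0
    while rate_max - rate > resolution:
        rate = (rate + rate_max) / 2.0  # probe the midpoint each RTT
        rtts += 1
    return rtts

# Recovering from 50% to ~100% of line rate at 1% resolution:
print(binary_increase_rtts(0.5, 1.0))  # 6 RTTs of idealized binary search
```
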
“…Receivers prioritize the flow with fewest remaining bytes when assigning tokens, achieving near optimal performance. ExpressPass [170] is another technique that uses receiver-side credit packets to control sender-side rate; credit packets can be lost without consequence, and the loss rate is used to gauge the connection capacity. Homa [171] is a new connectionless protocol inspired by pHost and ExpressPass, which can significantly reduce latency with no network support.…”
Section: G. Datacenter Network and The Incast Issue
confidence: 99%
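
The credit-pacing idea described above can be illustrated with a toy model. This is not the control law from the paper: the credit_round and adjust helpers, the 10% target credit-loss ratio, and the gain are all invented for illustration. The point is only that one data packet is sent per surviving credit, so dropped credits are harmless and the credit loss ratio itself becomes the capacity signal.

```python
# Toy model of receiver-driven credit pacing (not the paper's control law).
# The receiver emits credits at some rate, the network drops whatever
# exceeds the bottleneck's credit budget, the sender transmits exactly one
# data packet per surviving credit, and the observed credit loss ratio
# tells the receiver whether it is over- or under-shooting capacity.
# All rates are in packets per RTT and purely illustrative.

def credit_round(credit_rate, bottleneck_capacity):
    delivered_credits = min(credit_rate, bottleneck_capacity)
    data_packets = delivered_credits          # one data packet per credit
    loss_ratio = 1.0 - delivered_credits / credit_rate
    return data_packets, loss_ratio

def adjust(credit_rate, loss_ratio, target_loss=0.1, gain=0.5):
    # Simple proportional nudge toward the target credit-loss ratio
    # (a stand-in for the real feedback algorithm).
    return credit_rate * (1.0 + gain * (target_loss - loss_ratio))

rate, capacity = 40.0, 100.0
for rtt in range(30):
    sent, loss = credit_round(rate, capacity)
    rate = adjust(rate, loss)
print(round(rate, 1))  # ~111: about 10% above capacity, so ~10% of
                       # credits are dropped, matching target_loss
```

As the quote notes, congestion in this style of scheme is pushed onto the small, droppable credit packets rather than onto the data packets they pace.
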