2014
DOI: 10.1109/tnet.2013.2289382
FAST CLOUD: Pushing the Envelope on Delay Performance of Cloud Storage With Coding

Abstract: Distributed storage systems often employ erasure codes to achieve high data reliability while attaining space efficiency. Such storage systems are known to be susceptible to long tails in response time. It has been shown that in modern online applications such as Bing, Facebook, and Amazon, the long tail of latency is of particular concern, with 99.9th percentile response times that are orders of magnitude worse than the mean. Taming tail latency is very challenging in erasure-coded storage systems since quant…
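To make the abstract's claim about the mean-versus-tail gap concrete, here is a small self-contained sketch (my own illustration with made-up numbers, not data from the paper): most requests take a fast path, a small fraction hit a straggler, and the 99.9th percentile ends up orders of magnitude above the mean.

```python
# Illustrative only: a toy latency distribution where a rare slow path
# inflates the 99.9th percentile far beyond the mean.
import random

random.seed(1)

def request_latency_ms() -> float:
    # 99.8% of requests are served quickly (mean 2 ms); 0.2% hit a slow
    # straggler path (mean 500 ms). These numbers are assumptions.
    if random.random() < 0.998:
        return random.expovariate(1 / 2.0)
    return random.expovariate(1 / 500.0)

samples = sorted(request_latency_ms() for _ in range(200_000))
mean = sum(samples) / len(samples)
p999 = samples[int(0.999 * (len(samples) - 1))]
print(f"mean = {mean:.1f} ms, 99.9th percentile = {p999:.1f} ms")  # tail >> mean
```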

Cited by 67 publications (109 citation statements) | References 25 publications
“…where $E[V_j]$ and $E[V_j^2]$ are given in (19), and $f_j$ is the limiting probability that an arbitrary job makes type-$j$ service start.…”
Section: Download With Any Availability
Mentioning confidence: 99%
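One way to read this excerpt, hedged because the cited paper's equation (19) is not reproduced here: if an arbitrary job starts a type-$j$ service with limiting probability $f_j$, then the unconditional service-time moments are mixtures of the per-type moments, which is what a Pollaczek-Khinchine-style delay expression would consume:

$$
E[V] = \sum_j f_j\, E[V_j], \qquad
E[V^2] = \sum_j f_j\, E[V_j^2], \qquad
E[W] \approx \frac{\lambda\, E[V^2]}{2\bigl(1 - \lambda\, E[V]\bigr)},
$$

where $\lambda$ is the job arrival rate; the approximation sign reflects that the exact form used in the cited work may differ.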
“…where $E[V_j]$ and $E[V_j^2]$ for $j \in \{0, 1, 2\}$ follow from (19). We next estimate $f_j$ under high traffic. Consider a join queue at the tail of the system, in which the tasks that finish service wait for their siblings.…”
Section: Simplex Queue For Availability Two
Mentioning confidence: 99%
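As a minimal illustration of the join queue described in this excerpt (my own sketch, not code from the cited paper): tasks that finish service are parked per job until all of their siblings have also finished, at which point the job departs.

```python
# Hedged sketch of a join queue: finished tasks wait for their siblings.
from collections import defaultdict

class JoinQueue:
    def __init__(self, tasks_per_job: int):
        self.tasks_per_job = tasks_per_job
        self.finished = defaultdict(set)   # job_id -> indices of finished tasks

    def task_done(self, job_id: int, task_idx: int) -> bool:
        """Record a finished task; return True once the whole job can depart."""
        self.finished[job_id].add(task_idx)
        if len(self.finished[job_id]) == self.tasks_per_job:
            del self.finished[job_id]      # job leaves the join queue
            return True
        return False                       # this task waits for its siblings

# Usage: the three tasks of job 7 finish one by one; the job departs last.
jq = JoinQueue(tasks_per_job=3)
print([jq.task_done(7, i) for i in range(3)])   # [False, False, True]
```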
“…A queuing model closely related to erasure-coded storage is the fork-join queue [15], which has been extensively studied in the literature. Recently, in [2], the authors proposed a heuristic transmission scheme based on this fork-join queuing model, in which a file request is forked to all n storage nodes hosting the file's chunks and exits the system once any k chunks are processed, with the coding parameters tuned dynamically to improve latency performance. In [4], the authors proposed a self-adaptive strategy that dynamically adjusts the chunk size and the number of redundant requests according to the workload status in erasure-coded storage systems, so as to minimize queuing delay in fork-join queues.…”
Section: B. Related Work
Mentioning confidence: 99%
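A minimal Monte Carlo sketch of the (n, k) fork-join read described in this excerpt (an illustration under simplifying assumptions, not the model of [2]): a request is forked to all n nodes and departs once any k chunk downloads finish, so its latency is the k-th order statistic of the chunk service times.

```python
# Hedged sketch: (n, k) fork-join read latency with i.i.d. exponential chunk
# times. Using the same mean chunk time for every setting is a simplification
# that ignores the smaller chunk size of coded reads.
import random

def fork_join_latency(n: int, k: int, mean_chunk_time: float = 1.0) -> float:
    """Latency of one (n, k) coded read: the k-th fastest of n chunk downloads."""
    times = sorted(random.expovariate(1.0 / mean_chunk_time) for _ in range(n))
    return times[k - 1]

def percentile(samples, q: float) -> float:
    s = sorted(samples)
    return s[int(q * (len(s) - 1))]

if __name__ == "__main__":
    random.seed(0)
    reps = 100_000
    # Compare a single-replica read (n = k = 1) with a (7, 4) coded read.
    for n, k in [(1, 1), (7, 4)]:
        lat = [fork_join_latency(n, k) for _ in range(reps)]
        print(f"(n={n}, k={k})  mean={sum(lat)/reps:.3f}  "
              f"p99.9={percentile(lat, 0.999):.3f}")
```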
“…As a result, researchers have proposed several approaches to achieve better latency performance while living with this high variability. One of the most promising approaches is that of scheduling redundant requests to multiple components or servers [1]–[5]. That … overheads.…”
Section: Introduction
Mentioning confidence: 99%
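The redundant-request idea in this excerpt can be sketched as follows (an illustration, not the schemes of [1]–[5]; the server names and the `fetch` helper are hypothetical): the same read is issued to several servers and the first response wins.

```python
# Hedged sketch: issue the same request to several servers, keep the first
# reply, and stop waiting for the stragglers.
import concurrent.futures as cf
import random
import time

def fetch(server: str, key: str) -> str:
    """Stand-in for a storage read; random latency mimics stragglers."""
    time.sleep(random.expovariate(1.0) * 0.01)
    return f"{key}@{server}"

def redundant_read(servers, key):
    pool = cf.ThreadPoolExecutor(max_workers=len(servers))
    futures = [pool.submit(fetch, s, key) for s in servers]
    done, _ = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
    result = next(iter(done)).result()
    pool.shutdown(wait=False)      # do not block on the slower replicas
    return result

print(redundant_read(["s1", "s2", "s3"], "block-42"))
```

The trade-off noted in this line of work is that such redundancy improves tail latency at the cost of extra load on the servers.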