Proceedings of the 4th Annual Symposium on Cloud Computing 2013
DOI: 10.1145/2523616.2523620

Small is better

Abstract: Public clouds have become a popular platform for building Internet-scale applications. Using virtualization, public cloud services grant customers full control of guest operating systems and applications, while service providers still retain the management of their host infrastructure. Because applications built with public clouds are often highly sensitive to response time, infrastructure builders strive to reduce the latency of their data center's internal network. However, most existing solutions require mo…

Cited by 48 publications (2 citation statements)
References 41 publications (55 reference statements)
“…High IOPS requires high QD for massive internal parallelism [17], while low QD should be maintained for low latencies. On the other hand, the consistent low latency feature enables ULL SSDs to keep a low queue depth relatively easily and not suffer the long tail problem [18]. Data center storage systems will benefit most from this unique feature to enable predictably fast services.…”
Section: Predictably Fast Storage
confidence: 99%
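The trade-off the quoted statement describes can be illustrated with a toy queueing model (a hypothetical sketch, not from the cited work): a device with fixed internal parallelism serves up to P requests concurrently, so raising queue depth (QD) beyond P adds latency without adding IOPS. The parallelism and service-time values below are illustrative assumptions.

```python
# Toy model of the QD trade-off: throughput saturates at the device's
# internal parallelism, while latency keeps growing with queue depth.
# `parallelism` and `service_us` are assumed illustrative values.

def model_ssd(qd, parallelism=8, service_us=80.0):
    """Return (iops, avg_latency_us) for a given queue depth."""
    # Up to `parallelism` requests are served concurrently; requests
    # beyond that wait in the device queue.
    busy = min(qd, parallelism)
    iops = busy * (1_000_000.0 / service_us)
    # Little's law: latency = outstanding requests / throughput.
    latency_us = qd / iops * 1_000_000.0
    return iops, latency_us

for qd in (1, 8, 32):
    iops, lat = model_ssd(qd)
    print(f"QD={qd:3d}  IOPS={iops:9.0f}  latency={lat:6.1f} us")
```

At QD=1 and QD=8 latency stays at the 80 µs service time, but at QD=32 throughput is unchanged while latency quadruples, which is why low-QD operation matters for tail latency.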
“…HULL [1] addresses the problem of delays from long network switch queues by rate limiting and shifting the queueing to the end hosts. Xu et al [23] also address this problem, but do so using network prioritization. Both papers allow low bandwidth workloads to quickly pass through the network switch, but do not address how to deal with higher bandwidth workloads with different end-to-end latency SLOs.…”
Section: Previous Work
confidence: 99%
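The general mechanism the quoted statement attributes to HULL, rate limiting at the end host so queueing builds up at the sender rather than in the switch, can be sketched with a token bucket (a hypothetical illustration, not HULL's actual implementation; all names and parameters below are assumptions):

```python
# Sketch of end-host rate limiting with a token bucket: packets that
# exceed the configured rate are queued at the sending host instead of
# being pushed into the network, keeping switch queues short.
import collections

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0          # refill rate in bytes/sec
        self.burst = burst_bytes            # bucket capacity in bytes
        self.tokens = burst_bytes
        self.last = 0.0
        self.backlog = collections.deque()  # packets held at the host

    def _refill(self, now):
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        # Drain host-side backlog first, in FIFO order.
        while self.backlog and self.tokens >= self.backlog[0]:
            self.tokens -= self.backlog.popleft()

    def send(self, pkt_bytes, now):
        """Return True if the packet goes out now, False if it is
        queued at the end host."""
        self._refill(now)
        if not self.backlog and self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes
            return True
        self.backlog.append(pkt_bytes)
        return False
```

For example, with `TokenBucket(8000, 1000)` (1000 bytes/sec, 1000-byte burst), a 600-byte packet at t=0 is sent immediately, a second 600-byte packet at t=0 is held at the host, and by t=1 the refilled tokens drain that backlog before new traffic is admitted.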