2017 IEEE International Symposium on Workload Characterization (IISWC)
DOI: 10.1109/iiswc.2017.8167770

Workload characterization of interactive cloud services on big and small server platforms

Cited by 29 publications (13 citation statements) · References 31 publications

“…First, we quantify how effective current datacenter architectures are at running microservices, as well as how datacenter hardware needs to change to better accommodate their performance and resource requirements (Section 4). This includes analyzing the cycle breakdown in modern servers, examining whether big or small cores are preferable [25, 35, 41, 42, 46-48], determining the pressure microservices put on instruction caches [37, 52], and exploring the potential they have for hardware acceleration [24, 27, 38, 49, 71]. We show that despite the small amount of computation per microservice, the latency requirements of each individual tier are much stricter than for typical applications, putting more pressure on predictably high single-thread performance.…”
Section: Cluster Management Implications
confidence: 99%
“…Brawny vs. wimpy cores: There has been a lot of work on whether small servers can replace high-end platforms in the cloud [25, 46-48]. Despite the power benefits of simple cores, interactive services still achieve better latency in servers that optimize for single-thread performance.…”
Section: Architectural Implications
confidence: 99%
“…to handle network interrupts across the server socket when operating at max load. Allowing LC threads to share cores with IRQ cores leads to both lower throughput and higher latency [16]. 8GB of memory is exclusively allocated to the OS.…”
Section: Characterization
confidence: 99%
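The excerpt above describes a common isolation practice: dedicating a few cores to network interrupt (IRQ) handling and keeping latency-critical (LC) threads off those cores. A minimal Linux sketch of that separation follows; the NIC name (eth0), the choice of cores 0-1 for IRQs, and the helper names are illustrative assumptions, not the configuration used in the cited paper, and its 8GB OS memory reservation is not reproduced here.

```python
# Illustrative sketch (not the cited paper's setup): steer a NIC's IRQs onto a
# reserved pair of cores and pin the calling latency-critical thread to the rest.
import glob
import os

IRQ_CORES = "0-1"                              # cores assumed reserved for network IRQs
LC_CORES = set(range(2, os.cpu_count() or 4))  # remaining cores for latency-critical threads


def steer_nic_irqs(cores: str = IRQ_CORES, nic_prefix: str = "eth0") -> None:
    """Write the reserved core list into smp_affinity_list for every IRQ of the NIC."""
    for action in glob.glob("/proc/irq/*/*"):          # e.g. /proc/irq/42/eth0-TxRx-3
        if os.path.basename(action).startswith(nic_prefix):
            irq_dir = os.path.dirname(action)
            try:
                with open(os.path.join(irq_dir, "smp_affinity_list"), "w") as f:
                    f.write(cores)                      # requires root privileges
            except OSError:
                pass                                    # some IRQs cannot be re-steered


def pin_thread_to_lc_cores() -> None:
    """Keep the current (latency-critical) thread off the IRQ cores."""
    os.sched_setaffinity(0, LC_CORES)


if __name__ == "__main__":
    steer_nic_irqs()
    pin_thread_to_lc_cores()
```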
“…We evaluate the end-to-end service on two Cavium ThunderX boards (2 sockets, 48 1.8GHz in-order cores per socket, and a 16-way shared 16MB LLC). The boards are connected on the same ToR switch as the rest of our cluster, and their memory, network, and OS configurations are the same as the other servers [5]. Fig.…
Section: Movie Streaming
confidence: 99%