2018
DOI: 10.1109/lca.2018.2839189

The Architectural Implications of Cloud Microservices

Abstract: Cloud services have recently undergone a shift from monolithic applications to microservices, with hundreds or thousands of loosely-coupled microservices comprising the end-to-end application. Microservices present both opportunities and challenges when optimizing for quality of service (QoS) and cloud utilization. In this paper we explore the implications cloud microservices have on system bottlenecks, and datacenter server design. We first present and characterize an end-to-end application built using tens o…

Cited by 88 publications (51 citation statements)
References 15 publications
“…First, we quantify how effective current datacenter architectures are at running microservices, as well as how datacenter hardware needs to change to better accommodate their performance and resource requirements (Section 4). This includes analyzing the cycle breakdown in modern servers, examining whether big or small cores are preferable [25,35,41,42,46,47,48], determining the pressure microservices put on instruction caches [37,52], and exploring the potential they have for hardware acceleration [24,27,38,49,71]. We show that despite the small amount of computation per microservice, the latency requirements of each individual tier are much stricter than for typical applications, putting more pressure on predictably high single-thread performance.…”
Section: Cluster Management Implications
confidence: 99%
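The claim above, that each tier's latency requirement is much stricter than for typical applications, follows from simple budget arithmetic: an end-to-end latency target must be divided across every microservice on the request path. The sketch below illustrates this with hypothetical numbers (the SLO, tier count, and per-hop network cost are illustrative assumptions, not figures from the paper):

```python
# Illustrative sketch: splitting an end-to-end latency SLO across a chain
# of microservice tiers shows how each tier's compute budget shrinks as
# the chain deepens. All numbers below are hypothetical.

def per_tier_budget_ms(end_to_end_slo_ms, num_tiers, per_hop_network_ms):
    """Evenly divide the SLO remaining after network hops among the tiers."""
    network_total = per_hop_network_ms * (num_tiers - 1)
    compute_budget = end_to_end_slo_ms - network_total
    if compute_budget <= 0:
        raise ValueError("network overhead alone exceeds the SLO")
    return compute_budget / num_tiers

# A monolith gets the whole budget; a 10-tier chain leaves a sliver per tier.
print(per_tier_budget_ms(100.0, 1, 0.5))   # -> 100.0 ms for a single service
print(per_tier_budget_ms(100.0, 10, 0.5))  # -> 9.55 ms per tier
```

With a fixed 100 ms target, a ten-tier chain leaves each tier under 10 ms, which is why the excerpt stresses predictably high single-thread performance: a tail-latency hiccup in any one tier consumes a large fraction of that tier's budget.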
“…Serverless computing and other container-based platforms suffer from several performance and utilization barriers. There are several ways to address these problems, including datacenter design [Gan and Delimitrou 2018], resource allocation [Björkqvist et al 2016], programming abstractions [Baldini et al 2017; Rabbah 2017], edge computing [Aske and Zhao 2018], and cloud container design [Shen et al 2019]. λ is designed to elucidate subtle semantic issues (not performance problems) that affect programmers building serverless applications.…”
Section: Related Work
confidence: 99%
“…Microservices can be deployed either on virtual machines running in the data center or in the cloud infrastructure. Many researchers have already worked to identify the bottlenecks and implications in the infrastructure design involved in local data centers [16,17]. MSA was designed to achieve horizontal scalability and quality of service (QoS).…”
Section: Choice of the Right Infrastructure Platform / Related Work
confidence: 99%