Proceedings of the 5th International Workshop on Serverless Computing 2019
DOI: 10.1145/3366623.3368135

Towards Serverless as Commodity

Cited by 20 publications (16 citation statements); references 4 publications.
“…Eismann et al [37] demonstrated the benefits and challenges that arise in the performance testing of microservices and how to manage the unique complications that arise while doing so. Kaviani et al [38] discuss the effectiveness of several key components of Knative and its contribution to open-source serverless computing platforms. They found the Knative autoscaler highly effective and mature for modern workloads.…”
Section: Related Work
confidence: 99%
“…Kubernetes is to cloud-native applications what the operating system is to traditional applications. It is becoming the de facto standard for Platform as a Service [3], abstracting the computational infrastructure and standardizing deployment, so that an application can run unmodified on sites across the globe. Scientific applications are routinely deployed on Kubernetes [4][5][6], and even HPC use cases are being investigated [7].…”
Section: Why Kubernetes?
confidence: 99%
“…Based on this, we define 3 QoS classes, which serve as Service Level Objectives (SLO), against the Service Level Indicator (SLI) of response latency under load. Because the latency depends on the complexity of each website, which is in the hands of the website admins and not the infrastructure, the SLO is met not by defining a set value of the SLI, but by asserting that the SLI be stable. The 3 QoS classes are:…”
Section: Service Level Objectives
confidence: 99%
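The excerpt above defines an SLO not as a fixed latency bound but as an assertion that the SLI (response latency under load) stays stable. A minimal sketch of such a stability check, using the coefficient of variation as the stability measure — the function name and the 10% tolerance are illustrative assumptions, not taken from the paper:

```python
# Hedged sketch: an SLO as a *stability* assertion over latency samples,
# rather than a fixed SLI threshold. Tolerance and names are illustrative.
from statistics import mean, stdev

def sli_is_stable(latency_samples_ms, max_cv=0.10):
    """Return True if the coefficient of variation (stdev / mean) of the
    latency samples is within the tolerance, i.e. latency is 'stable'."""
    m = mean(latency_samples_ms)
    return (stdev(latency_samples_ms) / m) <= max_cv

print(sli_is_stable([102, 98, 101, 99, 100]))    # True: low spread
print(sli_is_stable([100, 100, 250, 100, 100]))  # False: latency spike
```

This captures the cited idea that a slow-but-consistent website can still meet its SLO, while a site whose latency fluctuates under load does not.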
“…Fixing the number of containers in an application-agnostic manner will result in SLO violations, especially for functions with strict response latencies. Moreover, the schedulers in existing open-source platforms such as Fission [14] and Knative [59] rely on horizontal pod autoscalers that are unaware of application execution times and therefore cannot employ queuing. Key takeaway: Based on SLOs, cold-start latencies and execution times of applications, queuing functions can minimize the number of containers spawned without violating SLOs.…”
Section: Cold-start Latency For Single
confidence: 99%
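The takeaway above — that queuing, informed by cold-start latency, execution time, and the SLO, can avoid spawning containers — can be sketched as a simple per-request placement rule. The policy and all numbers below are illustrative assumptions, not the cited paper's algorithm:

```python
# Hedged sketch: queue a request on a warm container when the wait still
# meets the SLO deadline; otherwise spawn a new container only if paying
# the cold-start latency can still meet it. All values in milliseconds.
def placement(queue_wait, exec_time, cold_start, slo_deadline):
    """Return 'queue', 'spawn', or 'violation' for one request."""
    if queue_wait + exec_time <= slo_deadline:
        return "queue"        # waiting is cheaper than a cold start
    if cold_start + exec_time <= slo_deadline:
        return "spawn"        # only a fresh container meets the SLO
    return "violation"        # no placement meets the deadline

print(placement(queue_wait=50,  exec_time=100, cold_start=400, slo_deadline=200))  # queue
print(placement(queue_wait=600, exec_time=100, cold_start=400, slo_deadline=600))  # spawn
```

An execution-time-agnostic horizontal pod autoscaler, by contrast, would scale on load alone and spawn a container even in the first case, where queuing is sufficient.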