Proceedings of the 11th ACM Symposium on Cloud Computing 2020
DOI: 10.1145/3419111.3421280

Characterizing Serverless Platforms with ServerlessBench

Abstract: Serverless computing promises auto-scalability and cost-efficiency (in a "pay-as-you-go" manner) for highly productive software development. Because of these virtues, serverless computing has motivated increasingly many new applications and services in the cloud. This, however, also presents new challenges, including how to efficiently design high-performance serverless platforms and how to efficiently program on those platforms. This paper proposes ServerlessBench, an open-source benchmark suite for characterizing serverless…


Cited by 113 publications (76 citation statements)
References 19 publications
“…vHive adopts dockerized benchmarks from FunctionBench, which provides a diverse set of Python benchmarks [32,33]. ServerlessBench contains a number of multi-function benchmarks, focusing on function composition patterns [65]. Researchers and practitioners release a range of systems that combine the FaaS programming model and autoscaling [8-10, 30, 36].…”
Section: Related Work 8.1 Open-source Serverless Platforms
confidence: 99%
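The "function composition patterns" that ServerlessBench focuses on can be illustrated with a minimal sketch of the sequence pattern, where one function's output feeds the next. The function names and the in-process driver below are purely illustrative, not the benchmark suite's actual code:

```python
# Minimal sketch of the "sequence" function-composition pattern,
# the kind of multi-function workload ServerlessBench characterizes.
# Stage names and the in-process invoke_chain driver are hypothetical.

def resize(payload):
    # Stage 1: stand-in for downscaling an image (halve each value).
    return {"pixels": [p // 2 for p in payload["pixels"]]}

def watermark(payload):
    # Stage 2: stand-in for stamping each pixel (add a constant).
    return {"pixels": [p + 1 for p in payload["pixels"]]}

def invoke_chain(stages, event):
    # The driver composes functions by piping each stage's output
    # into the next stage's input.
    for stage in stages:
        event = stage(event)
    return event

result = invoke_chain([resize, watermark], {"pixels": [10, 20, 30]})
# result == {"pixels": [6, 11, 16]}
```

On a real platform each stage would be a separately deployed function invoked over the network, which is precisely why composition overheads are worth benchmarking.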
“…For the former, we use 19 multimedia processing functions, available online (see our code repository in §1). For multi-stage functions, we study four applications: two data analytics applications as in [23] (a MapReduce-based "word count" application and Thousand Island Scanner (THIS) [33], a distributed video processing benchmark, used in §7.2), a cloud-based Illegitimate Mobile App Detector (IMAD) application [45], and an image thumbnail generator pipeline (Image Processing) from the ServerlessBench suite [47].…”
Section: Discussion
confidence: 99%
“…Similar to Maissen, they also found different CPU models executing the cloud function, which could also explain the scatter plot in Figure 5 in [55] (respectively Figure 4 in [53]), where two performance ranges for LINPACK were visible. The last benchmarking paper included in the SLR is another workbench, called ServerlessBench [56]. It included an experiment comparing a multi-threaded application with parallel function instances.…”
Section: Related Approaches
confidence: 99%
“…PaaS, where a single piece of functionality in the scope of a cloud function is hidden within a larger application/microservice. Therefore, the multithreading aspect is important to consider when optimizing the performance and costs of cloud functions in FaaS experiments that use multi-core configurations [9,56]. This differs from long-running applications in the PaaS area, where multi-threaded implementations of pieces of functionality are often avoided for the sake of code readability and maintenance, which is also addressed by static code quality tools.…”
Section: Checklist for FaaS Benchmarking
confidence: 99%
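The experiment referenced above (a multi-threaded application versus parallel function instances) can be sketched as follows. This is an illustrative reconstruction, not ServerlessBench's actual code: one function instance fans a CPU-bound job out over threads, while the alternative deployment shape runs each job in its own single-threaded instance, emulated here with separate processes:

```python
# Illustrative sketch of the two deployment shapes compared in that
# experiment (hypothetical code, not from ServerlessBench itself):
# (a) one function instance running N threads, versus
# (b) N parallel single-threaded instances, emulated with processes.

from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def cpu_bound(n):
    # Busy-work standing in for a compute-heavy function body.
    total = 0
    for i in range(n):
        total += i * i
    return total

def one_instance_n_threads(jobs):
    # Multi-threaded inside a single function instance: in CPython the
    # GIL serializes this CPU-bound work, so extra vCPUs can go unused.
    with ThreadPoolExecutor(max_workers=len(jobs)) as ex:
        return list(ex.map(cpu_bound, jobs))

def n_parallel_instances(jobs):
    # Parallel single-threaded instances, emulated with one process
    # per job: each worker has its own interpreter and its own core.
    with ProcessPoolExecutor(max_workers=len(jobs)) as ex:
        return list(ex.map(cpu_bound, jobs))
```

Both variants return identical results; the interesting measurement is wall-clock time and cost under a given vCPU allocation, which is why the checklist above flags multithreading for multi-core FaaS configurations.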