Proceedings of the 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems 2021
DOI: 10.1145/3445814.3446714
Benchmarking, analysis, and optimization of serverless function snapshots

Abstract: Serverless computing has seen rapid adoption due to its high scalability and flexible, pay-as-you-go billing model. In serverless, developers structure their services as a collection of functions, sporadically invoked by various events like clicks. High inter-arrival time variability of function invocations motivates the providers to start new function instances upon each invocation, leading to significant cold-start delays that degrade user experience. To reduce cold-start latency, the industry has turned to …
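As a rough illustration of the cold-start problem the abstract describes, the sketch below models a provider that reuses a warm instance when one is idle and otherwise boots a new one on demand. The pool, the Instance class, and the simulated boot delay are illustrative assumptions, not the paper's system.

```python
import time

class Instance:
    """Stand-in for a sandboxed function instance."""
    def __init__(self, function_name: str):
        # Cold start: creating the sandbox and loading the runtime and the
        # function's dependencies; the sleep is a stand-in for that delay.
        time.sleep(0.5)
        self.function_name = function_name

    def run(self, event):
        return f"{self.function_name} handled {event!r}"

# Idle (warm) instances per function; an empty pool means the next
# invocation pays the full cold-start cost.
warm_pool: dict[str, list[Instance]] = {}

def invoke(function_name: str, event):
    pool = warm_pool.setdefault(function_name, [])
    instance = pool.pop() if pool else Instance(function_name)  # warm vs. cold
    try:
        return instance.run(event)
    finally:
        pool.append(instance)  # keep the instance around for later reuse

if __name__ == "__main__":
    invoke("resize-image", {"id": 1})   # cold: pays the boot delay
    invoke("resize-image", {"id": 2})   # warm: reuses the idle instance
```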

Cited by 75 publications (37 citation statements)
References 30 publications (21 reference statements)
“…Fork-based startup [51, 54, 78] and snapshot-based startup [11, 16, 35, 54, 91, 92] are the two most widely adopted optimizations for reducing startup latency. Molecule follows the line of research, with two new contributions.…”
Section: Optimizing Startup Latency (mentioning)
confidence: 99%
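To make the first of these two families concrete, here is a minimal sketch of fork-based startup in the spirit of the systems cited above, not the design of any one of them: a long-lived parent pays the initialization cost once, and each invocation runs in a cheap copy-on-write fork of that warm parent.

```python
import json   # stand-in for an expensive-to-import runtime dependency
import os
import sys

def handle_invocation(event: dict) -> None:
    # The child inherits the parent's already-initialized interpreter state,
    # so it starts without repeating the cold-start work.
    print(json.dumps({"pid": os.getpid(), "event": event}))

def serve(events) -> None:
    for event in events:
        pid = os.fork()      # copy-on-write clone of the warm parent (Unix only)
        if pid == 0:         # child: handle one invocation, then exit
            handle_invocation(event)
            sys.exit(0)
        os.waitpid(pid, 0)   # parent: reap the child and stay warm

if __name__ == "__main__":
    serve([{"request": i} for i in range(3)])
```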
“…As shown in Figure 15 (a), snapshot (or checkpoint and restore) and fork are the two most widely adopted optimizations for reducing startup latency. For example, Replayable Execution [92] and FireCracker [91] leverage prepared snapshots to mitigate the application initialization cost. Catalyzer leverages … With Molecule, these steps are sufficient to enable GPU for serverless computing, and GPU functions can seamlessly cooperate with CPU, DPU and FPGA functions.…”
Section: Comparison With State-of-the-art Systems (mentioning)
confidence: 99%
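As a toy illustration of the snapshot-based path this statement describes, the sketch below restores a previously captured instance when a snapshot exists and otherwise boots cold and captures one. The pickle-based "snapshot" and the directory layout are assumptions for illustration only; the actual systems named above capture guest memory and device state, not a Python object.

```python
import pickle
from pathlib import Path

SNAPSHOT_DIR = Path("snapshots")   # illustrative location, not a real layout

class Instance:
    """Stand-in for a function instance whose initialization is expensive."""
    def __init__(self, function_name: str):
        self.function_name = function_name
        self.state = {"initialized": True}   # placeholder for costly init work

def start_instance(function_name: str) -> Instance:
    snap = SNAPSHOT_DIR / f"{function_name}.snap"
    if snap.exists():
        # Fast path: load the captured state instead of re-initializing.
        return pickle.loads(snap.read_bytes())
    # Slow path: full cold start, then persist a snapshot for future calls.
    instance = Instance(function_name)
    SNAPSHOT_DIR.mkdir(exist_ok=True)
    snap.write_bytes(pickle.dumps(instance))
    return instance

if __name__ == "__main__":
    start_instance("thumbnail")   # first call: cold start + snapshot capture
    start_instance("thumbnail")   # later calls: restore from the snapshot
```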
“…Thus, we use our staged approach (§6) to find a good approximation. Some systems use observations of past memory accesses or past working sets (e.g., from prior invocations of a program) to perform targeted prefetching [33, 35, 56, 77, 92] and approximate Belady's algorithm (MIN) [72]. SC's obliviousness and our memory programming approach allow MAGE to compute the memory access pattern without first running the program, and then apply these techniques using the access pattern itself.…”
Section: Related Work (mentioning)
confidence: 99%
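For readers unfamiliar with the baseline this statement refers to, below is a small, self-contained implementation of Belady's optimal offline policy (MIN), which the cited prefetching systems approximate. The page trace and cache size in the example are made up for illustration.

```python
def belady_min_misses(accesses: list, cache_size: int) -> int:
    """Belady's MIN: on a miss with a full cache, evict the resident page
    whose next use lies farthest in the future (or that is never used again).
    Returns the number of misses, the minimum achievable for this trace."""
    cache: set = set()
    misses = 0
    for i, page in enumerate(accesses):
        if page in cache:
            continue
        misses += 1
        if len(cache) >= cache_size:
            def next_use(p):
                # Index of p's next access after position i, or infinity.
                for j in range(i + 1, len(accesses)):
                    if accesses[j] == p:
                        return j
                return float("inf")
            cache.remove(max(cache, key=next_use))
        cache.add(page)
    return misses

if __name__ == "__main__":
    trace = ["a", "b", "c", "a", "b", "d", "a"]
    print(belady_min_misses(trace, cache_size=2))   # optimal miss count for this trace
```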