Proceedings of the Seventeenth European Conference on Computer Systems 2022
DOI: 10.1145/3492321.3527539

Jiffy


Cited by 25 publications (12 citation statements). References 21 publications.
“…If so, resources are over-allocated, while a cost-effective resource is allocated if the job is not latency-sensitive. Finally, Khandelwal et al. [34] proposed Jiffy, which combines the profiling process of the Pocket service [50] with a multiplexing method to adjust resource provisioning. This approach enables sharing of available capacity across concurrently running jobs, ensuring efficient resource utilization.…”
Section: Application Profiling
confidence: 99%
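As a rough illustration of what multiplexing capacity across concurrently running jobs means, the sketch below implements a simplified water-filling (max-min fair) allocator. This is not Jiffy's actual mechanism, and the function and job names are assumptions for illustration only.

```python
# Simplified water-filling allocator (illustrative only, not Jiffy's
# actual mechanism): capacity left unused by jobs with small demands is
# redistributed to jobs that still need more, so concurrently running
# jobs share the available capacity instead of each getting a fixed slice.

def multiplex(total_capacity, demands):
    """Return a max-min fair allocation of total_capacity over demands."""
    alloc = {job: 0 for job in demands}
    active = {job: d for job, d in demands.items() if d > 0}
    remaining = total_capacity
    while active and remaining >= len(active):
        share = remaining // len(active)
        satisfied = []
        for job in list(active):
            grant = min(active[job], share)
            alloc[job] += grant
            active[job] -= grant
            remaining -= grant
            if active[job] == 0:
                satisfied.append(job)
        if not satisfied:
            break  # every remaining job absorbed a full share and wants more
        for job in satisfied:
            del active[job]
    return alloc


# Job 'a' needs little; its slack is shared between 'b' and 'c'.
print(multiplex(100, {"a": 20, "b": 70, "c": 50}))
# prints {'a': 20, 'b': 40, 'c': 40}
```

The point of the sketch is the redistribution step: job `a`'s unused 13 units flow to `b` and `c` rather than sitting idle, which is the efficiency property the citing paper attributes to capacity sharing.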
“…In contrast, our solution does not need any information on resource demand. Also, as shown in [7,16,17], intermediate data sizes can consistently vary during the workload execution, resulting in the well-understood problem of potential performance degradation and/or resource underutilization [11,20]. All these works involve indirect communication, demanding two serial data copies over the network in the critical path: one from producer function to shared storage and one from shared storage to consumer function.…”
Section: Related Work
confidence: 99%
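The two-serial-copy critical path described above can be sketched as follows. The `SharedStore` class and the partition key naming are illustrative stand-ins for a remote service such as S3 or an in-memory key-value tier, not any cited system's API.

```python
# Illustrative sketch (assumed names, not any cited system's API) of
# indirect communication between serverless functions: the producer
# writes intermediate data to shared storage and the consumer reads it
# back, so the critical path contains two serial copies over the network.

class SharedStore:
    """Stand-in for remote shared storage (e.g. S3 or a key-value tier)."""

    def __init__(self):
        self._objects = {}

    def put(self, key, value):
        # Copy 1: producer -> shared storage.
        self._objects[key] = value

    def get(self, key):
        # Copy 2: shared storage -> consumer.
        return self._objects[key]


def producer(store, partition_id):
    """A 'map'-style function emitting intermediate data."""
    intermediate = [x * x for x in range(4)]  # 0, 1, 4, 9
    store.put(f"shuffle/{partition_id}", intermediate)


def consumer(store, partition_id):
    """A 'reduce'-style function that can only read via the store."""
    return sum(store.get(f"shuffle/{partition_id}"))


store = SharedStore()
producer(store, 0)
print(consumer(store, 0))  # prints 14
```

Because functions are not directly addressable, the producer cannot stream to the consumer: both copies traverse the network serially, which is the latency cost these works try to hide or avoid.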
“…The slow data transfers between functions make the CloudSort benchmark up to 500× slower when executed on AWS Lambda with S3 than on a cluster of Virtual Machines (VMs). Recent studies tackle this problem by implementing optimized exchange operators [13,15], using multi-tier storage combining slow with fast storage or solely remote in-memory storage [7,8,16], exploiting per-node caches [2,19], co-locating functions in a single container [1,5,9,18], handling external storage on long-running VMs [3,22], or circumventing the network constraints [21]. However, these methods either use domain-specific optimizations, require two copies of data over the network, are not fully transparent to the user, break the advantage of fine-grained scaling, or use non-serverless components.…”
Section: Introduction
confidence: 99%
“…Functions also are not directly addressable, which forces them to communicate through non-local storage services (e.g. Amazon S3, Azure Blob Storage); provisioned "far-memory" key-value stores [64], [88], [152]; or queueing services [65].…”
Section: Background and Motivation
confidence: 99%
“…Some approaches to address this problem include (1) adding faster - but still remote - key-value stores such as in Pocket [64], Jiffy [152], REDIS [88], Wukong [153], or R2E2 [154]; (2) using a serverful orchestrator to manage long serverless invocations as short-lived containers [14],
Section: Introduction
confidence: 99%