2011
DOI: 10.1007/978-3-642-23400-2_46
Optimizing Multi-deployment on Clouds by Means of Self-adaptive Prefetching

Abstract: With Infrastructure-as-a-Service (IaaS) cloud economics getting increasingly complex and dynamic, resource costs can vary greatly over short periods of time. Therefore, a critical issue is the ability to deploy, boot and terminate VMs very quickly, which enables cloud users to exploit elasticity to find the optimal trade-off between the computational needs (number of resources, usage time) and budget constraints. This paper proposes an adaptive prefetching mechanism aiming to reduce the time required…

Cited by 15 publications (23 citation statements) · References 9 publications
“…Moreover, for the same machine configuration, the clouds offer different billing options (on-demand, reserved, and spot instances), which are charged differently. Scheduling enough resources to meet user demands yet keep the cost low while adapting to workload changes remains challenging, despite recent research efforts [9][10][11].…”
Section: Finding Scheduling Policies That Can Schedule Diverse Workloads
confidence: 99%
“…We assume that clouds have infinite capacity. Each newly provisioned VM needs several minutes to boot [10,14]. A VM is charged per hour; even a fractional consumption of less than one hour is counted as one full hour.…”
Section: Workload and Resource Model
confidence: 99%
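The billing assumption quoted above (per-hour charging, with any fraction of an hour rounded up to a full hour) can be made concrete with a minimal sketch; the function name and the example hourly rate below are hypothetical and only illustrate the rounding rule, not any provider's actual pricing.

```python
import math

def vm_cost(usage_seconds: float, hourly_rate: float) -> float:
    """Cost of a single VM under per-hour billing: any fraction
    of an hour is charged as a full hour (ceiling of usage)."""
    billed_hours = math.ceil(usage_seconds / 3600) if usage_seconds > 0 else 0
    return billed_hours * hourly_rate

# Example: 90 minutes of usage is billed as 2 full hours.
print(vm_cost(90 * 60, hourly_rate=0.10))  # -> 0.2
```

Under this model, reducing VM boot time matters directly: minutes spent booting still count toward the first billed hour, so faster deployment leaves more of each paid hour for useful computation.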
“…Using a decentralized storage solution (such as a parallel file system [13][14][15] or a dedicated repository [16]) reduces contention thanks to striping, but is only partially effective in our case, because the VM instances often access the same chunks in the same order. In our previous work [7], we show how to alleviate this issue by means of adaptive prefetching, however I/O contention to the repository is still a potential problem for scalability.…”
Section: Related Work
confidence: 99%
“…Since the VM instances of multi-deployments often follow a similar access pattern, a natural idea in this context is to enable the VM instances to talk to each other and "help" each other out in order to reduce the pressure on the remote repository. Based on the observation that I/O contention leads to jitter [7] (i.e., slight differences in time when the same chunk is accessed), we propose to organize the VM instances in a peer-to-peer topology where each VM has a set of neighbors, with whom it "gossips" about the chunks that should be fetched on-demand. Based on this information, VMs are able to anticipate future trends in access pattern and obtain chunks from their neighbors before they are actually needed, effectively preventing costly remote accesses if the anticipation was successful.…”
Section: Design Principles
confidence: 99%
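To make the gossip-based prefetching idea described in the statement above more tangible, here is a minimal illustrative sketch: each VM records the chunks it reads, gossips those chunk IDs to a small random subset of neighbors, and prefetches hinted chunks from peers before its own access pattern reaches them, so a later remote-repository access can be avoided. All class, method, and parameter names (VMNode, receive_hint, fanout, etc.) are assumptions made for illustration and do not reflect the paper's actual implementation.

```python
import random
from collections import deque

class VMNode:
    """Illustrative peer in a gossip-based prefetching overlay."""

    def __init__(self, vm_id, neighbors=None, fanout=2):
        self.vm_id = vm_id
        self.neighbors = neighbors or []   # peer VMNode objects
        self.fanout = fanout               # neighbors contacted per gossip round
        self.local_cache = {}              # chunk_id -> chunk data
        self.prefetch_queue = deque()      # chunk IDs hinted by peers

    def access_chunk(self, chunk_id, repository):
        """Read a chunk, fetching from the remote repository on a miss,
        then gossip the access so neighbors can prefetch it."""
        if chunk_id not in self.local_cache:
            self.local_cache[chunk_id] = repository.fetch(chunk_id)
        self._gossip(chunk_id)
        return self.local_cache[chunk_id]

    def _gossip(self, chunk_id):
        # Tell a random subset of neighbors which chunk was just read.
        peers = random.sample(self.neighbors,
                              min(self.fanout, len(self.neighbors)))
        for peer in peers:
            peer.receive_hint(chunk_id, source=self)

    def receive_hint(self, chunk_id, source):
        # Queue the hinted chunk for prefetching if we don't have it yet.
        if chunk_id not in self.local_cache:
            self.prefetch_queue.append((chunk_id, source))

    def prefetch_step(self):
        """Fetch one hinted chunk from the peer that announced it,
        anticipating the local access pattern and avoiding a later
        costly remote-repository access."""
        while self.prefetch_queue:
            chunk_id, source = self.prefetch_queue.popleft()
            if chunk_id not in self.local_cache and chunk_id in source.local_cache:
                self.local_cache[chunk_id] = source.local_cache[chunk_id]
                break
```

The sketch captures the design choice highlighted in the citation: because multi-deployment instances follow similar access patterns with only slight jitter, a peer that has already read a chunk is a cheap source for neighbors that will need the same chunk moments later.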