Proceedings Title: Proceedings of the 2012 Winter Simulation Conference (WSC)
DOI: 10.1109/wsc.2012.6465036

Hardware-in-the-loop simulation for automated benchmarking of cloud infrastructures

Abstract: To address the challenge of automated performance benchmarking in virtualized cloud infrastructures, an extensible and adaptable framework called CloudBench has been developed to conduct scalable, controllable, and repeatable experiments in such environments. This paper presents the hardware-in-the-loop simulation technique used in CloudBench, which integrates an efficient discrete-event simulation with the cloud infrastructure under test in a closed feedback control loop. The technique supports the decomposit…
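The closed feedback loop described in the abstract can be sketched as a discrete-event simulator that drives the infrastructure under test and uses the measured responses to pace future events. The sketch below is illustrative only; all names (`StubInfrastructure`, `hil_loop`, the latency model) are assumptions and not part of CloudBench's actual API.

```python
import heapq

class StubInfrastructure:
    """Stand-in for the real cloud under test (hypothetical)."""
    def provision_vm(self, load):
        # Pretend heavier load slows provisioning linearly.
        return 1.0 + 0.1 * load

def hil_loop(n_events=5):
    """Discrete-event simulation closed over a system-under-test stub."""
    clock, queue = 0.0, []
    infra = StubInfrastructure()
    interval = 1.0                          # event spacing, adjusted by feedback
    heapq.heappush(queue, (clock, 0))       # (event time, load level)
    latencies = []
    while queue and len(latencies) < n_events:
        clock, load = heapq.heappop(queue)
        latency = infra.provision_vm(load)  # the "hardware" in the loop
        latencies.append(latency)
        # Feedback: measured latency paces the next simulated event.
        interval = max(1.0, latency)
        heapq.heappush(queue, (clock + interval, load + 1))
    return latencies
```

In a real deployment the stub would be replaced by calls to the actual cloud management API, so that simulated load patterns and measured behavior co-evolve, which is the essence of hardware-in-the-loop benchmarking.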

Cited by 5 publications (6 citation statements)
References 29 publications (28 reference statements)
“…Analyzing the results of this systematic survey, we can conclude that performance is the main objective for testing cloud-based systems. It appears 20 times in the list of selected papers [32], [97], [167], [2], [70], [124], [55], [118], [117], [158], [87], [127], [34], [7], [96], [6], [74], [65], [95], [67]. Some performance indicators assessed in these studies include response time, average latency, and execution time, to name a few.…”
Section: RQ1
confidence: 99%
“…For instance, paper [104] presents the comparison of costs and footprint among different cloud configurations. In the same way, paper [95] studies the impact on testing costs of sequential vs. parallel execution on cloud infrastructures.…”
Section: RQ4
confidence: 99%
“…In the symbiotic simulation paradigm, the simulation model benefits from the continuous supply of the latest data and the automatic validation of its simulation outputs, whereas the physical system benefits from the improved performance obtained from the analysis of simulation experiments. In this context, the proposals in [7], [8] have inspired us the most and are the most closely related to our research direction. More specifically, what we are trying to bring about is an advanced practical manifestation of the basic idea envisioned in [7] by Qi Liu et al. In that work, the authors primarily put forward a multi-agent-based symbiotic scheme for autonomic cloud management purposes.…”
Section: Comparison To Other Approaches
confidence: 99%
“…However, this work lacks the actualization or implementation details of the idea. In addition, [8] presents the hardware-in-the-loop simulation technique used in a cloud benchmarking tool, which integrates an efficient discrete-event simulation with the cloud infrastructure under test in a closed feedback control loop. The experiments demonstrated that the technique can synthesize complex resource usage patterns for effective cloud performance benchmarking.…”
Section: Comparison To Other Approaches
confidence: 99%
“…It can simulate various user activities through a number of easy-to-use configurables, including application load behavior, application arrival time and lifetime, VM capture activities, fail-and-repair activities, and the logical sequence or dependency between activities. The hardware-in-the-loop simulation design [10] enables it to conduct scalable, controllable, and repeatable experiments. CloudBench collects two types of metrics: management and runtime.…”
Section: Related Work
confidence: 99%
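The configurable user activities described in the last citation (arrival time, lifetime, and logical dependencies between activities) can be illustrated with a small sketch. The activity names, fields, and `schedule` helper below are assumptions for illustration only, not CloudBench's actual configuration format.

```python
# Hypothetical activity descriptions: each has an arrival time, a
# lifetime, and an optional dependency ("after") on another activity.
activities = [
    {"name": "provision", "arrival": 0.0, "lifetime": 2.0},
    {"name": "apply_load", "arrival": 0.5, "lifetime": 1.0, "after": "provision"},
    {"name": "capture_vm", "arrival": 3.0, "lifetime": 0.5, "after": "apply_load"},
]

def schedule(activities):
    """Order activities so each runs only after its dependency, earliest first."""
    done, ordered = set(), []
    pending = sorted(activities, key=lambda a: a["arrival"])
    while pending:
        for a in pending:
            if "after" not in a or a["after"] in done:
                ordered.append(a["name"])
                done.add(a["name"])
                pending.remove(a)
                break
    return ordered

# The two metric families the citation mentions; the metric names
# themselves are illustrative placeholders.
metrics = {"management": ["vm_provision_time"], "runtime": ["response_time"]}
```

Running `schedule(activities)` yields the dependency-respecting order `["provision", "apply_load", "capture_vm"]`, mirroring how a benchmarking tool would replay a logical sequence of user activities against the infrastructure.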