Published: 2013
DOI: 10.1109/tsc.2011.36

Performance Analysis of Network I/O Workloads in Virtualized Data Centers

Abstract: Server consolidation and application consolidation through virtualization are key performance optimizations in cloud-based service delivery industry. In this paper, we argue that it is important for both cloud consumers and cloud providers to understand the various factors that may have significant impact on the performance of applications running in a virtualized cloud. This paper presents an extensive performance study of network I/O workloads in a virtualized cloud environment. We first show that c…

Cited by 47 publications (40 citation statements)
References 29 publications
“…Barham et al. [18] pointed out that 30-40% of the execution time of a network transmit or receive operation was spent in the VMM remapping the addresses contained in the transmitted data packets. It has been demonstrated that CPU overhead and latency increase with the transmitted packet rate due to increased communication between the server (VMM) and client (VM) domains [136].…”
Section: Bandwidth, Speed, Distributed Storage and Computation (mentioning)
confidence: 99%
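The effect described above can be illustrated with a back-of-the-envelope model. The sketch below (all per-packet costs are assumed values for illustration, not figures from [18] or [136]) shows how CPU utilization grows linearly with the packet rate when a fixed slice of each packet's processing happens in the VMM for address remapping, with that slice sitting near the cited 30-40% range.

```python
# Back-of-the-envelope model: a fixed per-packet cost is spent in the VMM on
# address remapping, so total CPU demand (and, once the CPU saturates,
# latency) grows with the packet rate. All costs below are assumed values.

REMAP_US = 2.0   # assumed per-packet VMM remapping cost (microseconds)
OTHER_US = 4.0   # assumed per-packet guest/driver-domain processing cost (microseconds)

def cpu_utilization(packet_rate_pps: float) -> float:
    """CPU time demanded per second, as a fraction of one CPU-second."""
    return packet_rate_pps * (REMAP_US + OTHER_US) / 1e6

vmm_share = REMAP_US / (REMAP_US + OTHER_US)   # ~33%, within the cited 30-40% range
for rate in (10_000, 50_000, 100_000, 200_000):
    util = cpu_utilization(rate)
    print(f"{rate:>7} pps: CPU utilization {min(util, 1.0):6.0%}, VMM share {vmm_share:.0%}")
```

Once utilization approaches 100%, queueing in the driver domain makes per-packet latency climb sharply, which matches the behavior reported at high packet rates.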
“…Mars et al [24] propose that one way to measure the QoS of web servers is to measure the server throughput at different workload rates, namely, the maximum number of successful queries per second, as is the case in Google's web search. According to Mei et al [26], the performance of web servers is CPU bound under a mix of small files and is network bound under a mix of large files. This work considers the second case by assuming that web workloads are mainly characterized by network resource consumption (i.e., are network-intensive) and have residual CPU consumption.…”
Section: Network-bound Workloads (mentioning)
confidence: 99%
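The throughput-based QoS measurement attributed to Mars et al. [24] can be sketched as a small load driver: issue requests at increasing concurrency levels and record the successful queries per second the server sustains; the plateau approximates its maximum throughput. Everything concrete below (the target URL, request counts, and load levels) is hypothetical.

```python
# Hypothetical load driver: measure successful queries per second at several
# offered load levels; the plateau approximates the server's maximum throughput.

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"   # hypothetical web server under test

def fetch(_):
    """Return True if one request completes successfully."""
    try:
        with urllib.request.urlopen(URL, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

def successful_qps(concurrency: int, requests: int = 200) -> float:
    """Successful queries per second at the given concurrency level."""
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        ok = sum(pool.map(fetch, range(requests)))
    return ok / (time.monotonic() - start)

if __name__ == "__main__":
    for load in (1, 4, 16, 64):
        print(f"concurrency {load:>3}: {successful_qps(load):.1f} successful queries/s")
```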
“…This work considers the second case by assuming that web workloads are mainly characterized by network resource consumption (i.e., are network-intensive) and have residual CPU consumption. The CPU time consumed to process network requests can be divided into two major categories: the time spent establishing TCP connections and the time spent transporting web file content [26]. Because the demand for resources changes abruptly with time, the amount of transferred network I/O b(s) served by a task s at each instant is given by Eq.…”
Section: Network-bound Workloads (mentioning)
confidence: 99%
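A minimal sketch of the two-part CPU cost model mentioned above: each request pays a fixed connection-setup cost plus a per-byte transport cost, so small-file mixes exhaust the CPU first while large-file mixes saturate the network link, consistent with the observation from Mei et al. [26]. The cost constants and link capacity below are assumptions for illustration, not measurements from [26].

```python
# Two-part CPU cost per request: TCP connection setup + content transport.
# All constants are assumed for illustration.

CONN_SETUP_US = 80.0    # assumed CPU cost to establish a TCP connection (microseconds)
PER_KB_CPU_US = 5.0     # assumed CPU cost to transport 1 KB of content (microseconds)
LINK_GBPS = 1.0         # assumed network link capacity

def cpu_us_per_request(file_kb: float) -> float:
    return CONN_SETUP_US + PER_KB_CPU_US * file_kb

def bottleneck(file_kb: float) -> str:
    """Compare the request rate one CPU can sustain with what the link can carry."""
    cpu_rps = 1e6 / cpu_us_per_request(file_kb)
    net_rps = LINK_GBPS * 1e9 / 8 / (file_kb * 1024)
    return "CPU-bound" if cpu_rps < net_rps else "network-bound"

for size_kb in (1, 10, 100, 1000):
    print(f"{size_kb:>5} KB files: {bottleneck(size_kb)}")
```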
“…The delay center is used to model network and/or protocol delays introduced in establishing connections, etc. Performance parameters need to be continuously updated at runtime (see [12] for further details) in order to capture the transient behavior of VM network and I/O interference [13] and the performance variability of the Cloud provider over time [14]. This simplified performance model is used because alternative runtime adaptation decisions need to be evaluated very quickly, on a 5-10 minute time scale.…”
Section: Space4cloud (mentioning)
confidence: 99%
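A simplified sketch of the kind of performance model described above: mean response time is a fixed network/protocol delay plus an M/M/1 queueing term for VM processing. The service rate and delay values below are hypothetical stand-ins for parameters that would be re-estimated from monitoring data every 5-10 minutes to capture I/O interference and provider-side variability.

```python
# Delay center + M/M/1 sketch: response time = fixed network/protocol delay
# plus queueing at the VM. Parameters are hypothetical and would be refreshed
# from monitoring data at runtime.

def response_time(arrival_rate: float, service_rate: float, network_delay: float) -> float:
    """Mean response time in seconds; infinite if the VM is saturated."""
    if arrival_rate >= service_rate:
        return float("inf")
    return network_delay + 1.0 / (service_rate - arrival_rate)

# Two hypothetical parameter sets, as if re-estimated at consecutive 5-minute
# intervals to track I/O interference and provider-side variability.
for service_rate, delay in ((120.0, 0.010), (95.0, 0.025)):
    r = response_time(arrival_rate=80.0, service_rate=service_rate, network_delay=delay)
    print(f"mu={service_rate:>5.0f} req/s, delay={delay*1e3:.0f} ms -> R = {r*1e3:.1f} ms")
```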