2011 IEEE 19th Annual International Symposium on Modelling, Analysis, and Simulation of Computer and Telecommunication Systems
DOI: 10.1109/mascots.2011.67

Estimating Application Cache Requirement for Provisioning Caches in Virtualized Systems

Abstract: Miss rate curves (MRCs) are a fundamental concept in determining the impact of caches on an application's performance. In our research, we use MRCs to provision caches for applications in a consolidated environment. Current techniques for building MRCs at the CPU-cache level require changes to the applications and are restricted to a few processor architectures [7], [22]. In this work, we investigate two techniques to partition shared L2 and L3 caches in a server and build MRCs for the VMs. These techniques…
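The abstract builds on miss rate curves. As illustrative background only (not the paper's technique, which targets shared hardware L2/L3 caches for VMs), here is a minimal Python sketch of how an MRC for an LRU-managed cache can be computed from a reference trace using Mattson's stack-distance algorithm; the function name and the toy trace are made up for the example.

from collections import Counter

def miss_rate_curve(trace, max_size):
    """Miss rate of an LRU cache at every size 1..max_size (in blocks)."""
    stack = []              # LRU stack: most recently used block at index 0
    distances = Counter()   # histogram of reuse (stack) distances
    for block in trace:
        if block in stack:
            depth = stack.index(block)   # 0-based stack distance
            distances[depth] += 1
            stack.pop(depth)
        # Cold misses simply enter the stack; they miss at every cache size.
        stack.insert(0, block)           # block becomes most recently used
    total = len(trace)
    mrc, hits = [], 0
    for size in range(1, max_size + 1):
        hits += distances[size - 1]      # references with distance < size hit
        mrc.append((total - hits) / total)
    return mrc

# Toy trace of cache-block addresses
trace = [1, 2, 3, 1, 2, 3, 4, 1, 2, 3, 4, 5]
for size, rate in enumerate(miss_rate_curve(trace, 5), start=1):
    print(f"cache size {size}: miss rate {rate:.2f}")

The linear scan of the stack makes this O(N*M) in trace length and working-set size; production MRC tools typically use balanced trees or sampling instead, but the resulting curve is the same object the abstract refers to.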

Cited by 10 publications (7 citation statements). References 22 publications (25 reference statements).
“…Of particular interest is the cache, because current virtualization technologies do not ensure isolation of the cache usage of individual VMs accommodated by the same PM, leading to contention between them [58,100]. Thus, it is important to model and predict the performance interference that can be expected when co-locating a pair of VMs [56].…”
Section: PM Characteristics (mentioning)
confidence: 99%
“…A related approach, taken by several researchers, was to use a web application with real web traces: for example, RUBiS, a web application for online auctions [25], has been used by multiple researchers with various web server traces [51,95]; Wikipedia traces were also used [37]. Other benchmark applications used include the NAS Parallel Benchmarks (http://www.nas.nasa.gov/publications/npb.html) [58,73,96,103], the BLAS linear algebra package (http://www.netlib.org/blas/) [103] and the related Linpack benchmark (http://netlib.org/benchmark/hpl/) [31,99].…”
Section: Empirical Evaluation (mentioning)
confidence: 99%
“…Here, using Big Data analytics, it is possible to process data from multiple sources and extract relevant knowledge on the fly to drive the strategy or business of enterprises or other organizations. Consequently, to obtain good performance in the infrastructure that supports processing large quantities of data, such as low latency and high throughput, some management enhancements are needed so that the computing resources available in each data center are operated more intelligently, such as virtual machines (Dai et al, 2013), memory (Zhou & Li, 2013), CPU scheduling (Bae et al, 2012), cache (Koller et al, 2011), and I/O (Ram et al, 2013). Another very important functional aspect to be aware of in data centers is enhancing network performance (Marx, 2013; Lange et al, 2011; Saleem, Hassan, & Asirvadam, 2011).…”
Section: Data Centers (mentioning)
confidence: 99%
“…They may use it to fairly partition shared caches [10,22,26], optimize routing or the cache coherency protocol in their network [17,19,23,24], or provide quality of service (QoS) in memory accesses [21,30]. These are just simple ideas, but one could go beyond them to design a virtualization-capable processor.…”
Section: Introduction (mentioning)
confidence: 99%
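The last statement concerns fair partitioning of shared caches, which is also what the paper's per-VM MRCs are intended to drive. As a hedged illustration (not the paper's algorithm), the sketch below assumes each VM's MRC is already available as a list of miss rates indexed by the number of cache units allocated, and greedily hands out units by marginal miss-rate reduction; the function name and toy curves are hypothetical.

def partition_cache(mrcs, total_units):
    """Greedily split total_units of cache among VMs using their MRCs.
    mrcs: dict vm -> list where mrcs[vm][k] is the miss rate with k units."""
    alloc = {vm: 0 for vm in mrcs}

    def marginal_gain(vm):
        # Miss-rate reduction from giving this VM one more cache unit.
        cur, curve = alloc[vm], mrcs[vm]
        return curve[cur] - curve[cur + 1] if cur + 1 < len(curve) else 0.0

    for _ in range(total_units):
        best = max(mrcs, key=marginal_gain)  # VM that benefits most right now
        alloc[best] += 1
    return alloc

# Toy MRCs: miss rate as a function of allocated cache units (0..4)
mrcs = {
    "vm_a": [1.00, 0.60, 0.35, 0.30, 0.29],  # gains most from the first units
    "vm_b": [1.00, 0.95, 0.90, 0.50, 0.20],  # needs a larger share to pay off
}
print(partition_cache(mrcs, 4))  # -> {'vm_a': 3, 'vm_b': 1}

Greedy hill-climbing of this kind is only guaranteed optimal when the MRCs are convex; real curves often have plateaus and cliffs, which is precisely why measured per-VM MRCs are valuable when provisioning shared L2/L3 caches.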