Proceedings of the 2007 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems 2007
DOI: 10.1145/1254882.1254886
Abstract: As we enter the era of CMP platforms with multiple threads/cores on the die, the diversity of the simultaneous workloads running on them is expected to increase. The rapid deployment of virtualization as a means to consolidate workloads on to a single platform is a prime example of this trend. In such scenarios, the quality of service (QoS) that each individual workload gets from the platform can widely vary depending on the behavior of the simultaneously running workloads. While the number of cores assigned t…

Cited by 189 publications (124 citation statements)
References 26 publications
“…In our view, in CMPs, we have to maintain the same principle that rules today in SMP and uniprocessor systems. Suppose a process X runs for a period of time in a CMP. The main issue to address is how to determine dynamically (while process X is running simultaneously with other processes), at the end of each context switch, the time (or IPC) it would have taken X to execute the same instructions had it been alone in the system. An intuitive solution is to provide hardware mechanisms that determine the in-isolation IPC of each process in a workload by periodically running each process alone [3], [6]. However, as the number of processes executing simultaneously in a multithreaded processor grows to dozens or even hundreds, this solution will not scale: the number of isolation phases increases linearly with the number of processes in the workload.…”
Section: Formalizing the Problem
confidence: 99%
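The scaling concern in this statement can be made concrete with a hedged sketch: if every process must periodically be sampled alone to estimate its in-isolation IPC, the fraction of machine time spent in single-process mode grows linearly with the process count. The `phase` and `interval` values below are illustrative assumptions, not figures from the cited work.

```python
# Hedged sketch: why per-process isolation sampling scales poorly.
# Assumption (not from the paper): each process gets one isolation
# phase of `phase` cycles within every `interval`-cycle window.

def isolation_overhead(n_procs: int, phase: int = 1_000_000,
                       interval: int = 100_000_000) -> float:
    """Fraction of machine time spent running a single process alone,
    capped at 1.0 once the window is fully consumed by sampling."""
    return min(1.0, n_procs * phase / interval)

for n in (2, 8, 32, 128):
    print(n, isolation_overhead(n))
```

With these illustrative numbers, sampling overhead is 2% for two processes but the window is already exhausted well before 128 processes, which matches the statement's scalability objection.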
“…Consequently, a few researchers have investigated how chip-wide resource management techniques can be designed. Iyer et al [11] proposed a high-level framework for implementing a QoS-aware memory system, while Nesbit et al [12] proposed the Virtual Private Machines framework, in which a private virtual machine is created by dividing the available physical resources among applications. In addition, Bitirgen et al [10] showed how machine learning can be applied to the resource allocation problem.…”
Section: Related Work
confidence: 99%
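As a hedged illustration of the "dividing the available physical resources among applications" idea, the toy function below splits a fixed number of cache ways in proportion to per-application share weights using a largest-remainder rule. This is an illustrative sketch, not the actual Virtual Private Machines mechanism.

```python
# Hedged sketch: proportional division of cache ways among applications.
# Assumption: shares are arbitrary positive weights; allocation uses a
# simple largest-remainder rounding so every way is assigned.

def divide_ways(total_ways: int, shares: dict[str, float]) -> dict[str, int]:
    total = sum(shares.values())
    exact = {app: total_ways * s / total for app, s in shares.items()}
    alloc = {app: int(e) for app, e in exact.items()}
    leftover = total_ways - sum(alloc.values())
    # hand the remaining ways to the largest fractional remainders
    for app in sorted(exact, key=lambda a: exact[a] - alloc[a],
                      reverse=True)[:leftover]:
        alloc[app] += 1
    return alloc

print(divide_ways(16, {"hi": 3, "lo": 1}))  # {'hi': 12, 'lo': 4}
```

Real partitioning hardware typically enforces such an allocation with per-core way masks; the arithmetic above only decides the split.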
“…Previously, cache capacity interference has received a great deal of attention [1, 3-7], while only a few researchers have proposed techniques that reduce memory bus interference [2, 8, 9]. Furthermore, there has been little interest in the details of designing a complete, thread-aware memory system [10-12]. A first step towards a unified approach to reducing interference in the hardware-managed memory system is to develop an understanding of the problem.…”
Section: Introduction
confidence: 99%
“…Recently, platform support that enforces different QoS priorities has been proposed [16,21,12,18]. Among these studies, a promising direction for QoS management is hardware execution throttling.…”
Section: Novel Hardware Solutions to Mitigate Contention
confidence: 99%
“…Herdrich et al [18] use likely future hardware capabilities, core-specific dynamic voltage scaling and clock modulation, to throttle down low-priority applications and thus reduce their performance interference with high-priority applications. Ebrahimi et al [12] and Iyer et al [21] propose hardware changes that throttle memory requests to provide QoS management. Although these hardware solutions have shown promising results in simulation, they require significant changes to current commodity micro-architectures and cannot be applied to multicore platforms that are already in production or will be deployed in the near future.…”
Section: Novel Hardware Solutions to Mitigate Contention
confidence: 99%
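The execution-throttling approach discussed in these statements can be sketched as a feedback loop: whenever the high-priority application misses its IPC target, the low-priority core's duty cycle is stepped down; otherwise it is stepped back up. The duty-cycle knob, step size, and floor below are illustrative assumptions; real clock-modulation interfaces are exposed differently and are hardware-specific.

```python
# Hedged sketch: feedback-driven execution throttling via an abstract
# per-core duty-cycle knob (assumption: duty in [floor, 1.0], adjusted
# once per measurement epoch).

def adjust_duty(hi_ipc: float, hi_target: float, lo_duty: float,
                step: float = 0.125, floor: float = 0.25) -> float:
    """Throttle the low-priority core down while the high-priority
    app misses its IPC target; restore its duty cycle otherwise."""
    if hi_ipc < hi_target:
        return max(floor, lo_duty - step)
    return min(1.0, lo_duty + step)

duty = 1.0
for ipc in (0.8, 0.8, 0.9, 1.2):   # measured IPC of the high-priority app
    duty = adjust_duty(ipc, hi_target=1.0, lo_duty=duty)
print(duty)  # 0.75
```

This kind of control loop is attractive precisely because, unlike the cited hardware proposals, only the actuation knob needs hardware support; the policy itself can run in software.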