2014
DOI: 10.1145/2597457.2597464
A performance-aware quality of service-driven scheduler for multicore processors

Abstract: In the last decade, the IT industry shifted from single- to multicore processors. Multicore processors require better support from operating systems and runtimes to allow applications to achieve predictable performance and guarantee quality of service (QoS). Finding a proper schedule to yield the specified performance for single- and multi-threaded applications can be cumbersome; dealing with multi-programmed workloads may be even worse. We present a performance-aware QoS-driven scheduler for multicore processo…

Cited by 1 publication (6 citation statements) · References 24 publications
“…AutoPro requires each SLO-bound VM to make periodic performance reports available to its controller, in order to leverage its resource-performance models, as proposed in previous works [Zhang et al. 2002; Padala et al. 2009; Shen et al. 2011; Sironi et al. 2012; Bartolini et al. 2013a; Hoffmann et al. 2013; Sironi et al. 2014]. Any performance metric meaningful to the user can be used for these reports and to express SLOs; for instance, a web server can report throughput (e.g., requests/s) or latency (i.e., response time).…”
Section: Performance Metrics and Measurements
confidence: 99%
“…While these benchmarks were not designed to capture all the characteristics of typical cloud workloads, colocating PARSEC applications does create contention on compute bandwidth, thus stressing the problem that this article addresses. Since PARSEC applications do not natively report performance at runtime, we instrument a subset of the suite to report throughput through our efficient user-space implementation of the Application Heartbeats API [Hoffmann et al. 2010; Sironi et al. 2012; Bartolini et al. 2013a; Sironi et al. 2014]. The hypervisor accesses VM performance measurements as in previous work [Padala et al. 2009; Shen et al. 2011].…”
Section: Performance Metrics and Measurements
confidence: 99%
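The citation statements above describe applications emitting periodic performance reports (via the Application Heartbeats API) so that a controller or scheduler can compare measured throughput against an SLO. A minimal sketch of that reporting pattern is shown below; the class and method names are hypothetical illustrations, not the actual Heartbeats API.

```python
import time
from collections import deque


class HeartbeatReporter:
    """Hypothetical heartbeat-style reporter: one heartbeat per unit of
    work, with throughput computed over a sliding window of timestamps."""

    def __init__(self, window=100):
        # Keep only the most recent `window` heartbeat timestamps.
        self.timestamps = deque(maxlen=window)

    def heartbeat(self):
        # Called once per completed unit of work (e.g., per request served).
        self.timestamps.append(time.monotonic())

    def throughput(self):
        # Heartbeats per second over the window; a QoS controller would
        # poll this value and compare it against the application's SLO.
        if len(self.timestamps) < 2:
            return 0.0
        elapsed = self.timestamps[-1] - self.timestamps[0]
        return (len(self.timestamps) - 1) / elapsed if elapsed > 0 else 0.0
```

In the scheme the citing paper describes, the application calls `heartbeat()` in its main loop, while the scheduler (or hypervisor-side controller) reads the throughput externally to decide how to adjust resource allocations.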