2017
DOI: 10.1109/tpds.2016.2611572

Optimal Symbiosis and Fair Scheduling in Shared Cache

Cited by 7 publications (4 citation statements)
References 34 publications
“…Cache partitioning has been studied to allocate caches between multiple processes to minimize the miss rate and maximize throughput [56], [57], to guarantee the fairness between applications [58], [59], [56], [60], [61], [62], [63], and to protect the latency-sensitive jobs from batch jobs [64], [65], [66], [67]. Constructing miss rate curves or utility curves can give users a hint to allocate fast memory between multiple workloads [57], [68], [69], [70], [71], [72]. If users already know the utility curves of workloads, an auction can be used to allocate fast memory between workloads [73], [74].…”
Section: Discussion
confidence: 99%
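As an illustration of the miss-rate-curve-guided allocation described above, the sketch below hands out cache ways greedily to whichever workload gains the largest marginal miss reduction from one more way. The curves, workload names, and function are hypothetical illustrations under assumed inputs, not the allocation scheme of this paper or of the cited works.

# Minimal sketch (Python): greedy way allocation driven by per-workload
# miss-rate curves. miss_curves[name][w] is the assumed miss count when
# the workload holds w ways; all curves and names below are made up.
def partition_ways(miss_curves, total_ways):
    alloc = {name: 0 for name in miss_curves}
    for _ in range(total_ways):
        # Give the next way to the workload with the largest marginal gain.
        best = max(
            miss_curves,
            key=lambda n: miss_curves[n][alloc[n]] - miss_curves[n][alloc[n] + 1],
        )
        alloc[best] += 1
    return alloc

# Example with two made-up miss-rate curves over a 4-way cache.
curves = {
    "streaming": [100, 80, 79, 78, 77],        # one way captures most reuse
    "pointer_chasing": [100, 60, 35, 25, 22],  # keeps benefiting from more ways
}
print(partition_ways(curves, 4))  # -> {'streaming': 1, 'pointer_chasing': 3}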
“…They may be categorized by sample selection and mechanism. Samples may be selected by addresses [5], accesses [20,26], or windows [25,46]. The sampling may be done using compiler support [5] or binary instrumentation [25,46].…”
Section: Related Work
confidence: 99%
“…Samples may be selected by addresses [5], accesses [20,26], or windows [25,46]. The sampling may be done using compiler support [5] or binary instrumentation [25,46]. Trace analysis precisely identifies reuses at block granularity, but it requires a program input to run, and its cost is proportional to the trace length.…”
Section: Related Work
confidence: 99%
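As a rough illustration of the address-based sample selection at cache-block granularity mentioned in the two statements above, the sketch below tracks reuse times only for a hash-selected subset of blocks while scanning a full address trace, so the analysis cost still grows with trace length but only sampled blocks are recorded. The block size, sampling ratio, and function name are assumptions for illustration, not the mechanisms of the cited tools.

# Minimal sketch (Python): reuse-time measurement with address-based sampling.
BLOCK_BITS = 6      # assume 64-byte cache blocks
SAMPLE_MOD = 16     # sample roughly 1 in 16 blocks by block address

def sampled_reuse_times(address_trace):
    last_seen = {}   # sampled block -> trace index of its previous access
    reuse_times = []
    for i, addr in enumerate(address_trace):
        block = addr >> BLOCK_BITS
        if block % SAMPLE_MOD != 0:   # address-based sample selection
            continue
        if block in last_seen:
            reuse_times.append(i - last_seen[block])
        last_seen[block] = i
    return reuse_times

# Example: a tiny synthetic trace that revisits one sampled block (block 0).
trace = [0x0000, 0x1040, 0x0008, 0x2000, 0x0010]
print(sampled_reuse_times(trace))   # -> [2, 2]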
“…Contention on shared resources such as the last level cache (LLC) and memory bandwidth may cause serious performance degradation and unfairness, which makes efficient resource allocation a critical issue in data centers. A number of methods have been proposed to address LLC contention in real systems in order to improve system throughput [8,36,37] or mitigate unfairness [8,14,29,37]. In the recent decade, memory bandwidth scheduling has been extensively studied using simulation [7, 12, 13, 15-17, 21, 24, 25, 28, 43].…”
Section: Introduction
confidence: 99%