2017
DOI: 10.1109/tpds.2017.2713778

A Hardware Approach to Fairly Balance the Inter-Thread Interference in Shared Caches

Abstract: Shared caches have become the common design choice in the vast majority of modern multi-core and many-core processors, since cache sharing improves throughput for a given silicon area. Sharing the cache, however, has a downside: requests from multiple applications compete with each other for cache resources, so the execution time of each application increases over isolated execution. The degree to which the performance of each application is affected by the interference becomes unpredictable, yielding the system…

Cited by 8 publications (5 citation statements)
References 46 publications (48 reference statements)
“…Selfa et al. [95] introduced a hardware-based cache partitioning approach that reduces shared cache interference by assigning a private cache partition to each application. The size of each cache partition is dynamically adjusted at runtime according to the requirements of each application during its execution.…”
Section: Proposal Focused on Achieving a Better Performance
confidence: 99%
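The runtime resizing described in this statement can be pictured with a small sketch. The C program below is illustrative only: it assumes a hypothetical 16-way shared cache and four co-running applications, and redistributes ways in proportion to per-interval miss counts (a common utility-style heuristic, not necessarily the exact allocation policy of Selfa et al. [95]).

```c
/* Minimal sketch of runtime cache-partition resizing, assuming a
 * 16-way last-level cache shared by 4 applications. The miss counts
 * are hypothetical inputs; real hardware would read performance
 * counters each interval instead. */
#include <stdio.h>

#define NUM_APPS   4
#define TOTAL_WAYS 16

/* Give every application at least one way, then distribute the
 * remaining ways in proportion to each application's miss count. */
static void resize_partitions(const unsigned long misses[NUM_APPS],
                              int ways[NUM_APPS])
{
    unsigned long total = 0;
    for (int i = 0; i < NUM_APPS; i++)
        total += misses[i];

    int assigned = 0;
    for (int i = 0; i < NUM_APPS; i++) {
        int extra = total ? (int)((TOTAL_WAYS - NUM_APPS) * misses[i] / total) : 0;
        ways[i] = 1 + extra;
        assigned += ways[i];
    }

    /* Hand any ways lost to integer rounding to the neediest app. */
    int neediest = 0;
    for (int i = 1; i < NUM_APPS; i++)
        if (misses[i] > misses[neediest]) neediest = i;
    ways[neediest] += TOTAL_WAYS - assigned;
}

int main(void)
{
    unsigned long misses[NUM_APPS] = { 1200, 300, 4500, 900 }; /* one interval */
    int ways[NUM_APPS];
    resize_partitions(misses, ways);
    for (int i = 0; i < NUM_APPS; i++)
        printf("app %d: %d ways\n", i, ways[i]);
    return 0;
}
```

Guaranteeing every application at least one way means no application is ever starved of cache space, which is the point of assigning each one a private partition.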
“…If the cache is shared, a downside appears: requests from various applications compete for cache resources, so the execution time of each application increases over isolated execution. Fair-Progress Cache Partitioning (FPCP) has been proposed by (Vicent Selfa, Sahuquillo, Petit, & Gómez, 2017); this approach is a low-overhead, hardware-based cache partitioning scheme that targets system fairness. By allocating a cache partition to every application, FPCP decreases the interference, and it adjusts the partition sizes at runtime.…”
Section: Memory
confidence: 99%
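As a rough illustration of a fairness-driven step in the spirit of FPCP, the C sketch below moves one cache way from the least-slowed application to the most-slowed one. The per-application slowdown estimates (shared execution time over isolated execution time) are hypothetical inputs here; the actual FPCP hardware estimates progress online, and its repartitioning policy may differ from this simplification.

```c
/* Hedged sketch of one fairness-driven repartitioning step: estimate
 * each application's slowdown relative to isolated execution and move
 * a single cache way from the least-affected application to the
 * most-affected one. Slowdown values are assumed inputs. */
#include <stdio.h>

#define NUM_APPS 4

/* Applications holding a single way are never chosen as donors. */
static void rebalance(const double slowdown[NUM_APPS], int ways[NUM_APPS])
{
    int victim = 0, donor = -1;
    for (int i = 1; i < NUM_APPS; i++)          /* most-slowed app */
        if (slowdown[i] > slowdown[victim]) victim = i;
    for (int i = 0; i < NUM_APPS; i++)          /* least-slowed donor */
        if (i != victim && ways[i] > 1 &&
            (donor < 0 || slowdown[i] < slowdown[donor]))
            donor = i;
    if (donor >= 0) { ways[donor]--; ways[victim]++; }
}

int main(void)
{
    double slowdown[NUM_APPS] = { 1.10, 1.45, 1.05, 1.20 }; /* shared vs alone */
    int ways[NUM_APPS] = { 4, 4, 4, 4 };                    /* 16-way cache */
    rebalance(slowdown, ways);
    for (int i = 0; i < NUM_APPS; i++)
        printf("app %d: %d ways (slowdown %.2f)\n", i, ways[i], slowdown[i]);
    return 0;
}
```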
“…Table 1 provides an overview of the different systems explained in Section II for measuring and even enhancing operating system performance. As illustrated in the table, a number of novel approaches have been introduced, as in (Li et al., 2017), (Saez et al., 2017), (Seo et al., 2018), (V. Selfa, Sahuquillo, Petit, & Gómez, 2017) and (Lu et al., 2017), that cover many metrics. The most important approach is Argobots, proposed by (Seo et al., 2018), which includes several features: low-level threading, low-level tasking, and a lightweight framework.…”
Section: Memory
confidence: 99%
“…The idea behind cache partitioning is to retain the benefits of a monolithic, shared cache and, at the same time, avoid the disadvantages of cache interference. By mitigating cache interference, a well-crafted cache partitioning policy can improve performance [87] and fairness [78], and isolate applications for security [96] and quality of service (QoS) reasons [8]. Cache partitioning allows system software to divide the cache space between cores, threads, or applications.…”
Section: Cache Partitioning
confidence: 99%
“…Cache partitioning allows system software to divide the cache space between cores, threads, or applications. There are different ways to partition the cache: set partitioning from software with page coloring [101], way partitioning from hardware [78, 70], or probabilistic cache replacement mechanisms [58].…”
Section: Cache Partitioning
confidence: 99%
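Hardware way partitioning of the kind cited above [78, 70] can be sketched with per-application way bitmasks, loosely in the style of Intel CAT's capacity bitmasks (the encoding below is an assumption for illustration). On a miss, the victim line is selected only among the ways the requesting application owns, so applications cannot evict each other's lines:

```c
/* Sketch of way partitioning for a 16-way set: each application holds
 * a bitmask of the ways it may evict from. Illustrative encoding only. */
#include <stdint.h>
#include <stdio.h>

#define WAYS 16

/* Pick the victim way for `app`: the lowest-index way in its mask.
 * A real cache would apply LRU among the allowed ways instead. */
static int pick_victim(const uint16_t way_mask[], int app)
{
    for (int w = 0; w < WAYS; w++)
        if (way_mask[app] & (1u << w))
            return w;
    return -1; /* app owns no ways: misconfiguration */
}

int main(void)
{
    /* App 0 owns ways 0-7, app 1 owns ways 8-15. */
    uint16_t way_mask[2] = { 0x00FF, 0xFF00 };
    printf("app 0 evicts way %d\n", pick_victim(way_mask, 0)); /* prints 0 */
    printf("app 1 evicts way %d\n", pick_victim(way_mask, 1)); /* prints 8 */
    return 0;
}
```

Note that only victim selection is restricted: lookups still search all ways, so an application can hit on a line cached in another partition, which is how way-partitioned caches typically preserve correctness.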