2007
DOI: 10.1007/s11241-007-9032-3
Timing predictability of cache replacement policies

Abstract: Hard real-time systems must obey strict timing constraints. Therefore, one needs to derive guarantees on the worst-case execution times of a system's tasks. In this context, predictable behavior of system components is crucial for the derivation of tight and thus useful bounds. This paper presents results about the predictability of common cache replacement policies. To this end, we introduce three metrics, evict, fill, and mls, that capture aspects of cache-state predictability. A thorough analysis …


Cited by 149 publications (101 citation statements)
References 7 publications
“…This means that known precise and efficient cache analyses [5] for LRU can be applied to Selfish-LRU during WCET analysis. LRU is generally considered to be the most predictable replacement policy in the non-preemptive scenario [6].…”
Section: Useful Properties of Selfish-LRU
confidence: 99%
“…Work along these lines includes classifications of existing microarchitectures in terms of their predictability [17], [20], studies of the predictability of caches [6], and proposals of new microarchitectural techniques, such as novel multithreaded architectures that eliminate interference between threads [21], [22], [23], [24], [25] and DRAM controllers that allow multiple tasks to share DRAM devices in a predictable and composable fashion [26], [27]. In the following, we review work specifically concerning the interplay between multitasking and the memory hierarchy.…”
Section: Related Work
confidence: 99%
“…Reineke et al. analyzed the predictability of different cache replacement policies [33]. They show that the LRU policy performs best with respect to predictability.…”
Section: Related Work
confidence: 99%
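The fill metric from the abstract is easy to illustrate for LRU, which is why the citing papers above single it out as the most predictable policy: after k accesses to distinct blocks, a k-way LRU set is completely known, regardless of its unknown initial contents. A minimal sketch of this property (the `simulate_lru` helper is hypothetical, not code from the paper):

```python
from collections import OrderedDict

def simulate_lru(accesses, associativity):
    """Simulate one set of an LRU cache; return its final contents,
    most-recently-used block first."""
    cache = OrderedDict()  # insertion order: LRU first, MRU last
    for block in accesses:
        if block in cache:
            cache.move_to_end(block)       # hit: promote to MRU
        else:
            if len(cache) == associativity:
                cache.popitem(last=False)  # miss on full set: evict LRU
            cache[block] = True            # insert new block as MRU
    return list(reversed(cache))           # MRU first

# Two different (unknown) initial histories, then the same 4 distinct
# accesses a, b, c, d -- the 4-way set ends in the same known state:
print(simulate_lru(['x', 'y', 'z', 'w', 'a', 'b', 'c', 'd'], 4))
print(simulate_lru(['p', 'q', 'a', 'b', 'c', 'd'], 4))
```

Both calls end with the set holding exactly a, b, c, d in a known order, so fill = k for LRU; under policies such as FIFO or PLRU, more accesses are needed before the set contents are fully determined.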
“…The resulting FIFO strategy can be used for larger caches. To offset the less predictable behavior of FIFO replacement [33], the cache has to be larger than an LRU-based cache.…”
Section: Heap Allocated Objects
confidence: 99%
“…Likewise, Manjikian et al [16] demonstrated a 25% reduction in execution time as a result of modifying the source-code of the executing software to use cache partitioning. Many optimization techniques [21,17,27] increase cache hit rate by enhancing source code to increase temporal locality of data accesses, which defines the proximity with which shared data is accessed in terms of time [13]. For example, loop interchange and loop fusion techniques can increase temporal locality of accessed data by modifying application source code to change the order in which application data is written to and read from a processor cache [13,16].…”
Section: Introduction
confidence: 99%
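The loop-fusion point in the statement above can be sketched in a few lines (the function names are hypothetical, chosen only for illustration): the unfused version traverses the data twice, so the second pass may miss on blocks the first pass has already evicted, while the fused version touches each element once while it is still cache-resident.

```python
def scale_then_offset(data, s, o):
    # Unfused: two separate passes over the data. Between the passes,
    # early elements may have been evicted from the cache.
    tmp = [x * s for x in data]
    return [x + o for x in tmp]

def scale_offset_fused(data, s, o):
    # Fused: a single pass; each element is scaled and offset while it
    # is still in the cache, improving temporal locality.
    return [x * s + o for x in data]

print(scale_then_offset([1, 2, 3], 2, 1))   # same result either way
print(scale_offset_fused([1, 2, 3], 2, 1))
```

Both functions compute the same values; only the access pattern differs, which is exactly the kind of source-level transformation the cited optimization techniques apply.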