Design, Automation & Test in Europe Conference & Exhibition (DATE), 2017
DOI: 10.23919/date.2017.7927151
MALRU: Miss-penalty aware LRU-based cache replacement for hybrid memory systems

Cited by 12 publications (6 citation statements)
References 18 publications
“…Some cache techniques [18]–[20] were suggested earlier for improving traditional average memory access time for multi-level cache systems. In [18], hardware prefetching was considered to exploit spatial and temporal locality of references.…”
Section: Concurrent Average Memory Access Time (C-AMAT)
Citation type: mentioning (confidence: 99%)
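For context, the "traditional average memory access time" these techniques target is the standard textbook quantity (background material, not a formula taken from the cited papers):

    \mathrm{AMAT} = T_{\text{hit}} + m \times T_{\text{miss penalty}}

where m is the miss rate. For a multi-level cache it applies recursively: the miss penalty at level i is the AMAT of level i+1.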
“…In [19], multi-level caches were considered as primary and secondary memories for proxy servers to access web content. In [20], an LRU replacement policy was proposed that uses awareness of the cache miss penalty to balance memory access latency in a memory system built from different memory technologies, termed a "hybrid" system. The works in [18]–[20] were specific cache techniques that attempted to reduce average memory access time without considering any cost implications.…”
Section: Concurrent Average Memory Access Time (C-AMAT)
Citation type: mentioning (confidence: 99%)
“…To minimize energy consumption, the authors have proposed a hybrid cache architecture composed of non-volatile memory (NVM) and DRAM. The MALRU (Miss-penalty Aware LRU) [28] cache replacement algorithm tries to retain NVM blocks (high latency) in memory and preferentially selects victims from the DRAM blocks (low latency). Simultaneously, MALRU keeps updating the reserved section of DRAM blocks to improve performance.…”
Section: Cost-based Replacement Algorithms
Citation type: mentioning (confidence: 99%)
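The victim-selection mechanism described in these statements can be sketched in a few lines. This is a minimal reconstruction from the citing papers' descriptions, not the implementation from the MALRU paper: the class name, the reserved-area size, and the DRAM-first scan order are all assumptions.

    from collections import OrderedDict

    DRAM, NVM = "DRAM", "NVM"

    class MALRUSketch:
        """Hypothetical miss-penalty-aware LRU, reconstructed from the
        citing papers' descriptions; not the paper's implementation."""

        def __init__(self, capacity, reserved=2):
            self.capacity = capacity     # total number of cache blocks
            self.reserved = reserved     # MRU-end positions protected from eviction (assumed size)
            self.blocks = OrderedDict()  # addr -> backing tier; LRU at front, MRU at back

        def _victim(self):
            # Prefer evicting a low-miss-penalty DRAM block; NVM blocks and
            # the `reserved` most recently used blocks are protected.
            in_lru_order = list(self.blocks.items())
            unprotected = in_lru_order[: max(0, len(in_lru_order) - self.reserved)]
            for addr, tier in unprotected:
                if tier == DRAM:
                    return addr
            # No unprotected DRAM block left: fall back to plain LRU.
            return in_lru_order[0][0]

        def access(self, addr, tier):
            if addr in self.blocks:
                self.blocks.move_to_end(addr)    # hit: promote block to MRU
                return "hit"
            if len(self.blocks) >= self.capacity:
                del self.blocks[self._victim()]  # miss in a full cache: evict
            self.blocks[addr] = tier
            return "miss"

For example, MALRUSketch(capacity=4).access(0xA0, NVM) inserts an NVM-backed block that later evictions will skip in favor of unprotected DRAM-backed blocks. Falling back to plain LRU when no unprotected DRAM block exists keeps the policy no worse than LRU in the all-NVM case.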
“…Some studies have proposed techniques for reducing the number of LLC writebacks to the nonvolatile component of a hybrid main memory consisting of NVM and DRAM [6, 35]. In Reference [6], a miss-penalty-aware LRU-based cache replacement policy, called MALRU, is proposed to address the asymmetry of cache miss penalties on DRAM and NVM. MALRU keeps the high-latency NVM blocks, as well as the low-latency DRAM blocks with good temporal locality, in a reserved area to protect them from being evicted.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
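The penalty asymmetry this statement refers to can be made concrete with an expected-miss-penalty estimate (illustrative notation, not drawn from either paper):

    \mathbb{E}[T_{\text{miss}}] = p_{\text{DRAM}} \, T_{\text{DRAM}} + p_{\text{NVM}} \, T_{\text{NVM}}

where p_DRAM and p_NVM are the fractions of misses served by each component. Because T_NVM exceeds T_DRAM (typically by a larger margin for writes), steering evictions toward DRAM-backed blocks shifts future misses to the cheaper component, which is the intuition behind protecting NVM blocks in a reserved area.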