Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2019
DOI: 10.1145/3297858.3304024

Nimble Page Management for Tiered Memory Systems

Abstract: Software-controlled heterogeneous memory systems have the potential to increase the performance and cost efficiency of computing systems. However, they can only deliver on this promise if supported by efficient page management policies and mechanisms within the operating system (OS). Current OS implementations do not support efficient tiering of data between heterogeneous memories. Instead, they rely on expensive offlining of memory or swapping data to disk as a means of profiling and migrating hot or cold data…
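The mechanism the abstract refers to, moving pages between memory nodes, is also exposed to user space through Linux's NUMA migration interface. The sketch below is not the paper's implementation; it only illustrates, under the assumption of a two-node system (node 0 as the fast tier, node 1 as the slow tier) with libnuma installed, how individual pages can be asked to move between nodes with move_pages(2). The paper's contribution is in making kernel-side migration of this kind efficient, not in this per-page user-level path.

```c
/* Minimal sketch (not the paper's mechanism): migrating anonymous pages
 * between two NUMA nodes from user space with move_pages(2).
 * Assumes at least two memory nodes (e.g., node 0 = fast DRAM,
 * node 1 = slower tier exposed as a NUMA node) and libnuma installed.
 * Build: gcc migrate_demo.c -lnuma
 */
#include <numaif.h>      /* move_pages, MPOL_MF_MOVE */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NPAGES 8

int main(void)
{
    long page_size = sysconf(_SC_PAGESIZE);

    /* Allocate and touch NPAGES pages so they are actually backed by memory. */
    char *buf;
    if (posix_memalign((void **)&buf, page_size, NPAGES * page_size) != 0) {
        perror("posix_memalign");
        return 1;
    }
    memset(buf, 0xab, NPAGES * page_size);

    void *pages[NPAGES];
    int nodes[NPAGES];
    int status[NPAGES];
    for (int i = 0; i < NPAGES; i++) {
        pages[i] = buf + (long)i * page_size;
        nodes[i] = 1;   /* ask the kernel to move each page to node 1 */
    }

    /* pid 0 = current process; MPOL_MF_MOVE moves only pages owned by us. */
    if (move_pages(0, NPAGES, pages, nodes, status, MPOL_MF_MOVE) != 0) {
        perror("move_pages");
        return 1;
    }
    for (int i = 0; i < NPAGES; i++)
        printf("page %d now on node %d\n", i, status[i]);

    free(buf);
    return 0;
}
```

Moving many base pages one at a time through this path is exactly the kind of cost that efficient tiering has to avoid, which is the gap the paper targets.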

Cited by 89 publications (103 citation statements); cites 59 references. Citing publications span 2020–2024.

Citation statements:
“…The memory bandwidth of the slow memory node is throttled and saturated to emulate the slow memory. AMP can improve the performance of selected applications by 23.8% compared to the LRU lists adopted in the prior work [13]. Furthermore, AMP achieves 10.9%, 6.4%, and 17.6% higher performance than LRU, LFU, and Random, respectively.…”
Section: Introduction (mentioning)
confidence: 94%
“…(i) Our investigation first shows that none of the policies excels the others across the range of applications, since each application has a different memory access behavior and prefers a different migration policy. However, the page migration policies used by recent tiered memory studies are fixed to a single policy, such as a variant of the LRU lists in Linux systems [13]. (ii) If a proper migration policy is used for each application, our investigation shows that huge pages can be effectively used for migration in tiered memory systems, reaping the benefit of efficient address translation.…”
Section: Introduction (mentioning)
confidence: 96%
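To make the policy comparison in the statement above concrete, here is a hypothetical, self-contained sketch of how an LRU-style and an LFU-style policy can pick different demotion victims from the same page metadata. The struct and function names are illustrative only and do not come from the cited works.

```c
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative per-page metadata; field names are hypothetical. */
struct page_meta {
    uint64_t last_access;   /* e.g., timestamp of the last scan that saw the accessed bit set */
    uint64_t access_count;  /* e.g., accesses accumulated over scan rounds */
};

/* LRU-style choice: demote the page referenced least recently. */
static size_t pick_victim_lru(const struct page_meta *p, size_t n)
{
    size_t v = 0;
    for (size_t i = 1; i < n; i++)
        if (p[i].last_access < p[v].last_access)
            v = i;
    return v;
}

/* LFU-style choice: demote the page referenced least often. */
static size_t pick_victim_lfu(const struct page_meta *p, size_t n)
{
    size_t v = 0;
    for (size_t i = 1; i < n; i++)
        if (p[i].access_count < p[v].access_count)
            v = i;
    return v;
}

int main(void)
{
    /* Page 0: touched recently but rarely; page 1: touched often but long ago. */
    struct page_meta pages[2] = {
        { .last_access = 100, .access_count = 2  },
        { .last_access =  10, .access_count = 50 },
    };
    printf("LRU would demote page %zu, LFU would demote page %zu\n",
           pick_victim_lru(pages, 2), pick_victim_lfu(pages, 2));
    return 0;
}
```

A page that is touched often but not recently is protected by LFU and demoted by LRU; workloads whose access patterns lean one way or the other end up preferring different policies, which is the point the citing work makes.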
“…These approaches use a software-managed DRAM cache to mitigate the slow performance and block-level read/write granularity of NVM SSDs. Operating system support for managing heterogeneous memory [2,45] and support for transparent unified memory between GPU and CPU [26,33] have been studied extensively in the past. However, to the best of our knowledge, the proposed work is the first to explore the design space and cost-performance tradeoffs of large-scale DNN training on systems with DRAM and PMM.…”
Section: Related Work (mentioning)
confidence: 99%
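When the slower tier (for example, persistent memory configured as system RAM) is exposed as its own NUMA node, software-managed placement of the kind discussed above typically comes down to choosing a node per allocation. The sketch below is only an illustration using libnuma; the node numbers and the hot/cold split are assumptions, not taken from the cited work.

```c
/* Minimal sketch, not from the cited works: placing buffers on chosen
 * memory nodes with libnuma. Assumes the capacity tier shows up as its
 * own NUMA node; FAST_NODE and SLOW_NODE are hypothetical node numbers.
 * Build: gcc placement_demo.c -lnuma
 */
#include <numa.h>
#include <stdio.h>
#include <string.h>

#define FAST_NODE 0   /* assumed: local DRAM */
#define SLOW_NODE 1   /* assumed: PMM or other capacity tier */

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "libnuma reports no NUMA support on this system\n");
        return 1;
    }

    size_t sz = 64 << 20;  /* 64 MiB per buffer */

    /* Hot, latency-sensitive data on the fast node... */
    char *hot = numa_alloc_onnode(sz, FAST_NODE);
    /* ...and cold, capacity-bound data on the slow node. */
    char *cold = numa_alloc_onnode(sz, SLOW_NODE);
    if (!hot || !cold) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }
    memset(hot, 0, sz);    /* touching the pages materializes them on the node */
    memset(cold, 0, sz);

    printf("allocated %zu bytes on node %d and %zu bytes on node %d\n",
           sz, FAST_NODE, sz, SLOW_NODE);

    numa_free(hot, sz);
    numa_free(cold, sz);
    return 0;
}
```

Static placement like this is what page-migration mechanisms such as Nimble's then have to correct at runtime as hot and cold sets shift.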
“…Software approaches. Similarly, there are a number of approaches in the operating system [1,8,11,15,17,44,54,55,57,63] to optimize TLB shootdowns. Barrelfish [11], a research multi-kernel OS, uses message passing instead of IPIs to shoot down remote TLB entries.…”
Section: Related Work (mentioning)
confidence: 99%
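One reason these software approaches matter for tiered memory is that unmapping and migrating pages triggers TLB shootdown IPIs on conventional kernels. As a small observation aid (not code from any of the cited systems), the sketch below prints the shootdown counters that x86 Linux exposes in /proc/interrupts.

```c
/* Minimal observation aid: print the TLB shootdown interrupt counters
 * from /proc/interrupts (the row labelled "TLB" on x86 Linux).
 * Page migration and unmapping are among the operations that drive these
 * counters up, which is why the cited systems try to reduce or batch them.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/interrupts", "r");
    if (!f) {
        perror("fopen /proc/interrupts");
        return 1;
    }
    char line[4096];
    while (fgets(line, sizeof(line), f)) {
        /* Keep only the shootdown row, described as "TLB shootdowns". */
        if (strstr(line, "TLB"))
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}
```

Running it before and after a migration-heavy phase gives a rough per-CPU count of how many shootdown IPIs that phase generated.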