2016 IEEE 23rd International Conference on High Performance Computing (HiPC)
DOI: 10.1109/hipc.2016.023

Phoenix: Memory Speed HPC I/O with NVM

Cited by 11 publications (6 citation statements). References 15 publications.
“…We will explore pre-copying pages, based on write set estimation algorithms, into VAS page tables before the process restore. Our prior research has already demonstrated the practicality of such solutions for different workload classes [6,10]. In addition, information about hot pages could be stored in the snapshot, and could be used for page pre-copy.…”
Section: Summary and Future Work
confidence: 99%
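The pre-copy idea quoted above can be sketched as a two-step process: estimate the write set from prior epochs, then eagerly populate page-table entries from the snapshot before the restore. The sketch below is purely illustrative; all names (`estimate_write_set`, `precopy_pages`) and the dict-based "page table" are hypothetical stand-ins, not the interfaces of the cited work.

```python
# Illustrative sketch (not the cited work's API) of pre-copying hot pages
# before a process restore: pages predicted to be written soon, based on a
# simple frequency-based write-set estimate over prior epochs, are mapped
# eagerly so the restored process avoids early page faults.
from collections import Counter

def estimate_write_set(write_traces, top_k):
    """Predict hot pages as the most frequently written pages across epochs."""
    counts = Counter(page for epoch in write_traces for page in epoch)
    return [page for page, _ in counts.most_common(top_k)]

def precopy_pages(page_table, snapshot, hot_pages):
    """Eagerly populate the VAS page table for predicted-hot pages."""
    for page in hot_pages:
        if page in snapshot:
            page_table[page] = snapshot[page]

# Example: pages 2 and 3 dominate recent write traces.
traces = [[1, 2], [2, 3], [2, 3]]
hot = estimate_write_set(traces, top_k=2)
table = {}
precopy_pages(table, {p: f"data{p}" for p in range(6)}, hot)
```

In a real restore path the "snapshot" would be the persisted process image and the pre-copy would install actual mappings; the dict here only mirrors that structure.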
“…At the same time their utilization is likely to suffer if they are placed on a compute blade, while rack scale disaggregation could provide better utilization. Another issue relates to the fact that most DCs would like to keep a copy of the data outside of the rack for resiliency purposes, but this can add significant overhead and remains a subject of research [103].…”
Section: Prototype
confidence: 99%
“…We reconcile this by taking advantage of the combined aggregate, local and remote memory capacity and the accompanying bandwidth. The resulting technique, described in detail in [13], splits the output, writing a portion of it out to NVRAM and temporarily staging the remaining portion in DRAM buffers. The fault-tolerance of the DRAM-resident data is achieved through other techniques, specifically replication.…”
Section: Design Components
confidence: 99%
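The split-write technique quoted above divides a checkpoint between NVRAM and DRAM staging buffers according to a runtime-chosen split ratio. A minimal sketch of that flow, under the assumption that a plain file stands in for an NVRAM-backed store and that replication of the DRAM-resident remainder happens elsewhere (the function name `split_write` and all parameters are hypothetical):

```python
# Hypothetical sketch of a split write: persist the first `split_ratio`
# fraction of the data to (simulated) NVRAM, stage the rest in DRAM.
# The DRAM-resident remainder would be replicated to a peer node for
# resilience; here it is simply returned to represent the staging buffer.
import tempfile, os

def split_write(data: bytes, split_ratio: float, nvram_path: str) -> bytes:
    cut = int(len(data) * split_ratio)
    persistent, staged = data[:cut], data[cut:]
    with open(nvram_path, "wb") as f:  # stand-in for an NVRAM-backed store
        f.write(persistent)
        f.flush()
        os.fsync(f.fileno())           # force the "persistent" portion down
    return staged

path = os.path.join(tempfile.gettempdir(), "ckpt.nv")
staged = split_write(b"x" * 100, split_ratio=0.6, nvram_path=path)
```

Choosing the split ratio is exactly the kind of decision the quoted passage assigns to the runtime, trading persistence cost against DRAM capacity and replication overhead.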
“…The runtime remains responsible for dynamically assessing overall resource availability and application demands in order to parametrize the execution of the technique, including the split ratio or the details of realizing the persistence of the staging buffers, and it does this given target optimization metrics such as gains in execution time, energy efficiency, or reliability guarantees. In addition, we integrate and further extend additional optimizations to reduce the overall data movement requirements, those taking place in the critical path of persistence operations, or both [13,21], and will further consider methods for integrating energy awareness, leveraging our earlier work [22].…”
Section: Design Components
confidence: 99%