2017 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)
DOI: 10.1109/ipdpsw.2017.115
Exploring the Performance Benefit of Hybrid Memory System on HPC Environments

Abstract: Hardware accelerators have become a de-facto standard for achieving high performance on current supercomputers, and there are indications that this trend will continue. Modern accelerators feature high-bandwidth memory next to the computing cores. For example, the Intel Knights Landing (KNL) processor is equipped with 16 GB of high-bandwidth memory (HBM) that works together with conventional DRAM. Theoretically, HBM can provide ∼4× higher bandwidth than conventional DRAM. However, ma…

Cited by 28 publications (18 citation statements)
References 12 publications
“…Furthermore, the performance of Stable-NUMA did not improve significantly over Linux at graph500 and CG workload execution on every configuration. This was because the memory access pattern of graph500 has the characteristics of poor temporal and spatial locality [62]. Actually, by using the Linux perf [63] tool, we measured that the local memory access ratio at Server A with four NUMA nodes is nearly 25%.…”
Section: Discussion
Confidence: 99%
“…al. [18]. In contrast, for PageRank the difference between the two memory technologies becomes apparent at 64 threads, and the performance gap is almost 2x for 256 threads.…”
Section: A. MCDRAM versus DRAM on the KNL
Confidence: 94%
“…An important trend is the addition of wider buses and asynchronous protocols for data movement in the form of NVMe and also the support for highbandwidth memory (HBM) [74]. HBM requires architecture changes, which are not backwards compatible with older hardware, but will bring significant benefit to bandwidth-bound applications [73]. With 3D NAND, the capacity of SSDs will improve further, which might eventually replace HDDs in data centers [41].…”
Section: Technologies
Confidence: 99%