2020
DOI: 10.1049/iet-cdt.2019.0092

Survey on memory management techniques in heterogeneous computing systems

Abstract: A major issue faced by data scientists today is how to scale up their processing infrastructure to meet the challenge of big data and high-performance computing (HPC) workloads. In today's HPC domain, multiple graphics processing units (GPUs) must be connected alongside CPUs to accomplish large-scale parallel computing. Data movement between the processor and on-chip or off-chip memory creates a major bottleneck in overall system performance. The CPU/GPU processes all the data in a computer's memory…
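The CPU/GPU data-movement bottleneck the abstract describes can be made concrete with the two common CUDA memory-management styles sketched below: explicit host/device copies versus unified (managed) memory. This is a minimal illustrative sketch under assumed sizes and a placeholder kernel (`scale`); none of it is taken from the survey itself.

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

// Placeholder kernel: scales each element in place.
__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Style 1: explicit copies. Every transfer over the interconnect is
    // visible in the code; each cudaMemcpy is exactly the kind of
    // CPU<->GPU data movement identified as a performance bottleneck.
    float *h = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h[i] = 1.0f;
    float *d;
    cudaMalloc(&d, bytes);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);   // host -> device
    scale<<<(n + 255) / 256, 256>>>(d, n, 2.0f);
    cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);   // device -> host
    cudaFree(d);

    // Style 2: unified (managed) memory. One pointer is valid on both CPU
    // and GPU; the runtime migrates pages on demand, trading explicit
    // copies for page-fault-driven movement.
    float *u;
    cudaMallocManaged(&u, bytes);
    for (int i = 0; i < n; ++i) u[i] = 1.0f;           // touched on the CPU
    scale<<<(n + 255) / 256, 256>>>(u, n, 2.0f);       // migrated to the GPU
    cudaDeviceSynchronize();

    printf("h[0]=%f u[0]=%f\n", h[0], u[0]);
    cudaFree(u);
    free(h);
    return 0;
}
```

Which style performs better depends on access patterns and transfer granularity, which is part of what memory-management surveys of heterogeneous systems compare.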

Cited by 12 publications (2 citation statements). References: 82 publications.
“…Different multithreading libraries have different features, advantages, and disadvantages. Therefore, choosing an appropriate multithreading library is crucial to high performance [3].…”
Section: Introduction (mentioning; confidence: 99%)
“…Memory latency is becoming an overwhelming bottleneck in computer performance due to the "memory wall" [4,72] problem, especially with the advent of GPUs [43], TPUs [28], and heterogeneous architectures [17,52] that accelerate computation. Prefetching is critical in reducing program execution time and improving instructions per cycle (IPC) by hiding the latency.…”
Section: Introduction (mentioning; confidence: 99%)
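As a concrete instance of the latency hiding through prefetching that this citing paper refers to, the sketch below prefetches managed buffers to the GPU before the kernel that consumes them, so page migration overlaps with the launch instead of stalling it. The buffer size, stream usage, and kernel are illustrative assumptions, not details from the cited work.

```cuda
#include <cuda_runtime.h>

// Placeholder kernel: squares each input element.
__global__ void consume(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * in[i];
}

int main() {
    const int n = 1 << 22;
    const size_t bytes = n * sizeof(float);
    int device = 0;
    cudaGetDevice(&device);

    float *in, *out;
    cudaMallocManaged(&in, bytes);
    cudaMallocManaged(&out, bytes);
    for (int i = 0; i < n; ++i) in[i] = (float)i;      // populated on the CPU

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Without prefetching, the GPU's first access to `in` faults and waits
    // for on-demand page migration. Issuing the prefetch ahead of the
    // launch moves the data early and hides that memory latency.
    cudaMemPrefetchAsync(in, bytes, device, stream);
    cudaMemPrefetchAsync(out, bytes, device, stream);

    consume<<<(n + 255) / 256, 256, 0, stream>>>(in, out, n);

    // Prefetch the result back to host memory before the CPU reads it.
    cudaMemPrefetchAsync(out, bytes, cudaCpuDeviceId, stream);
    cudaStreamSynchronize(stream);

    cudaStreamDestroy(stream);
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```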