2019 IEEE/ACM Workshop on Memory Centric High Performance Computing (MCHPC)
DOI: 10.1109/mchpc49590.2019.00016
Machine Learning Guided Optimal Use of GPU Unified Memory

Cited by 10 publications (5 citation statements)
References 10 publications
“…In this paper, we pick a dataset and ML model generated by the XPlacer [17] to study how to FAIRify HPC datasets and ML models. The reason to choose XPlacer is that the authors released raw data with detailed documents explaining how data was generated and processed.…”
Section: XPlacer Datasets
confidence: 99%
“…Research activities in high-performance computing (HPC) community have applied machine learning (ML) for various research needs such as performance modeling and prediction [13], memory optimization [17,18], and so on. A typical ML-enabled HPC study generates a large amount of valuable datasets from the HPC experiment outputs.…”
Section: Introduction
confidence: 99%
“…The authors in [62] proposed a novel machine learning approach to find the optimal choice of GPU memory requirements for CUDA applications. The workflow of the proposed approach has two phases: (1) Offline learning; and (2) Online inference.…”
Section: Architecture/Platform/Framework and Strategy
confidence: 99%
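The two-phase workflow quoted above (offline learning, then online inference) can be sketched as host code that consults a previously trained model when choosing an allocation API. This is a minimal illustration only: `predict_placement`, its toy decision rule, and the feature values are hypothetical stand-ins, not XPlacer's actual predictor or feature set.

```cuda
#include <cuda_runtime.h>

// Hypothetical stand-in for a model trained in the offline phase.
enum Placement { DEVICE_MEMORY, UNIFIED_MEMORY };

Placement predict_placement(double reuse_ratio, double cpu_access_fraction) {
    // Toy rule: data touched often by the CPU favors unified memory.
    return (cpu_access_fraction > 0.1) ? UNIFIED_MEMORY : DEVICE_MEMORY;
}

int main() {
    const size_t n = 1 << 20;
    float *buf = nullptr;

    // Online inference phase: pick the allocator from the predicted label.
    if (predict_placement(0.8, 0.25) == UNIFIED_MEMORY) {
        cudaMallocManaged(&buf, n * sizeof(float)); // one pointer, CPU+GPU
    } else {
        cudaMalloc(&buf, n * sizeof(float));        // device-only allocation
    }

    cudaFree(buf);
    return 0;
}
```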
“…Rather than shifting data between separate host and device allocations, unified memory allocates a single pointer that can be used by both the CPU and the GPU [9]. Recent advancements in unified memory have added features such as page-fault handling for GPUs, on-demand data migration, GPU memory oversubscription, and access counters [10]. In the past, two distinct strategies, AutoSwap and SmartPool, have been applied to minimize GPU memory consumption without human intervention [11].…”
confidence: 99%
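The unified memory features named in this last statement (a single pointer shared by CPU and GPU, fault-driven migration, and explicit prefetching) can be sketched with the standard CUDA runtime calls below. This is a generic illustration of the mechanism, not code from any of the cited papers.

```cuda
#include <cuda_runtime.h>

__global__ void scale(float *x, size_t n) {
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;
}

int main() {
    const size_t n = 1 << 20;
    float *x = nullptr;

    // One allocation visible to both CPU and GPU; pages migrate on fault.
    cudaMallocManaged(&x, n * sizeof(float));
    for (size_t i = 0; i < n; ++i) x[i] = 1.0f;   // CPU touches the pages

    int dev = 0;
    cudaGetDevice(&dev);
    // Optional hint: prefetch to the GPU to avoid on-demand fault traffic.
    cudaMemPrefetchAsync(x, n * sizeof(float), dev);

    scale<<<(n + 255) / 256, 256>>>(x, n);
    cudaDeviceSynchronize();

    // Prefetch back so later CPU reads do not fault page by page.
    cudaMemPrefetchAsync(x, n * sizeof(float), cudaCpuDeviceId);
    cudaFree(x);
    return 0;
}
```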