SC16: International Conference for High Performance Computing, Networking, Storage and Analysis 2016
DOI: 10.1109/sc.2016.32

Understanding Performance Interference in Next-Generation HPC Systems

Cited by 19 publications (7 citation statements)
References 36 publications
“…Martinasso et al [36] develop a congestion-aware performance model for PCIe communication to study the impact of PCIe topology. Mondragon et al use both simulation and modeling techniques to profile next-generation interference sources and the performance of HPC benchmarks [12]. Yang et al [37] model application performance by running kernels on the target platform and then make cross-platform predictions based on the relative performance between the target platforms.…”
Section: Related Work
confidence: 99%
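The relative-performance idea attributed to Yang et al [37] can be sketched in a few lines of Python: measure representative kernels on both platforms and scale a known application runtime by their ratio. The function name, timings, and the simple linear-ratio model below are illustrative assumptions, not the actual method of [37].

def predict_runtime(app_time_ref, kernel_time_ref, kernel_time_target):
    """Scale an application runtime measured on a reference platform by the
    slowdown/speedup observed for representative kernels on the target."""
    relative_perf = kernel_time_target / kernel_time_ref
    return app_time_ref * relative_perf

# Kernels take 2.0 s on the reference system and 1.5 s on the target;
# the full application took 120 s on the reference system.
print(predict_runtime(app_time_ref=120.0,
                      kernel_time_ref=2.0,
                      kernel_time_target=1.5))   # ~90 s predicted on the target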
“…Statistical models [12,13] try to overcome these disadvantages by predicting scaling performance from a large number of sampled application runs, without digging into the source code. However, this approach requires many application runs to train the performance model before it reaches satisfying accuracy.…”
Section: Introduction
confidence: 99%
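A minimal Python sketch of such a statistical scaling model, assuming a power-law form and made-up sampled runs; a real model would use more samples and a richer feature set.

import numpy as np

# Sampled application runs: node counts and measured runtimes (illustrative).
nodes   = np.array([2, 4, 8, 16, 32])
runtime = np.array([410.0, 220.0, 130.0, 85.0, 60.0])

# Fit log(runtime) = a*log(nodes) + b, i.e. runtime ~ C * nodes**a,
# using only observed runs and no knowledge of the source code.
a, b = np.polyfit(np.log(nodes), np.log(runtime), 1)

def predict(n):
    return np.exp(b) * n ** a

print(predict(128))   # extrapolated runtime at 128 nodes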
“…In CC environments, sharing computational resources such as memory and processors creates performance impacts when workloads are run simultaneously. The proper allocation of resources can mitigate these impacts [17].…”
Section: Tier 1: Node Allocation in the Cloud Using Genetic Programming
confidence: 99%
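To illustrate how placement choices can reduce such interference, here is a hypothetical greedy heuristic in Python that avoids putting two memory-bound workloads on the same node. It is a stand-in for the resource-aware allocation idea only, not the genetic-programming allocator of the citing work; job names, flags, and slot counts are assumed.

def allocate(workloads, num_nodes, slots_per_node=2):
    """workloads: list of (name, is_memory_bound); returns node -> [names]."""
    placement = {n: [] for n in range(num_nodes)}
    mem_bound_on = {n: 0 for n in range(num_nodes)}
    for name, mem_bound in workloads:
        free = [n for n in placement if len(placement[n]) < slots_per_node]
        # Memory-bound jobs prefer nodes without a memory-bound tenant;
        # otherwise simply prefer the least-loaded node.
        node = min(free, key=lambda n: (mem_bound_on[n] if mem_bound else 0,
                                        len(placement[n])))
        placement[node].append(name)
        mem_bound_on[node] += int(mem_bound)
    return placement

jobs = [("stream-like", True), ("compute-A", False),
        ("graph-walk", True), ("compute-B", False)]
print(allocate(jobs, num_nodes=2))   # memory-bound jobs land on different nodes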
“…Here, a sensitive application is one in which an MPI synchronizing collective call (e.g., MPI_Barrier, MPI_Allreduce, MPI_Allgather, and so on) is made at least once per iteration of an iterative simulation or algorithm. Previous works have shown that interference sources have a potentially greater impact on applications with fine-grained parallelism (i.e., applications with shorter per-iteration intervals). For example, in Seelam et al, the authors show that jitter can cause slowdowns as high as 8% for applications with computation intervals of 100 ms, and slowdowns of over 16% for applications with 10-ms computation intervals at 32K CPUs.…”
Section: Importance of Time Agreement
confidence: 99%
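The sensitivity argument can be made concrete with a small mpi4py sketch: each iteration runs a short compute phase and then a synchronizing MPI_Allreduce, so an interference event on any single rank stalls every rank that iteration. The 10-ms interval, jitter probability, and delay length are illustrative assumptions, not values from the cited studies.

from mpi4py import MPI
import time, random

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
T_COMPUTE = 0.010      # 10-ms per-iteration compute phase (assumed)

start = MPI.Wtime()
for it in range(1000):
    t0 = time.perf_counter()
    while time.perf_counter() - t0 < T_COMPUTE:
        pass                            # busy-wait standing in for real work
    if random.random() < 0.01:
        time.sleep(0.005)               # occasional OS-noise/interference event
    # Synchronizing collective: everyone waits for the slowest rank, so the
    # per-iteration jitter of any rank is paid by the whole job.
    comm.allreduce(float(rank), op=MPI.SUM)

if rank == 0:
    print("elapsed:", MPI.Wtime() - start)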