2015 IEEE 21st International Conference on Parallel and Distributed Systems (ICPADS)
DOI: 10.1109/icpads.2015.92

Hardware-Centric Analysis of Network Performance for MPI Applications

Abstract: As the scale of high-performance computing systems increases, optimizing inter-process communication becomes more challenging while being critical for ensuring good performance. However, the hardware layer abstraction provided by MPI makes it difficult to study application communication performance over the network hardware, especially for collective operations. We present a new approach to network performance analysis based on exposing low-level communication metrics in a flexible manner and conducting hardwa…
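As a rough illustration of what "exposing low-level communication metrics" can look like in practice, the sketch below enumerates and reads MPI_T performance variables (pvars), the standard MPI-3 tool interface for implementation-internal counters. This is a hedged sketch only: the paper's actual mechanism may differ, and which pvars exist (if any) depends entirely on the MPI implementation.

/* Sketch: enumerate and read MPI_T performance variables (pvars), one
 * standard way to expose low-level communication metrics from an MPI
 * implementation.  Illustrative only -- available pvars are
 * implementation-specific and may be empty. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, num_pvars;
    MPI_Init(&argc, &argv);
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

    MPI_T_pvar_get_num(&num_pvars);
    MPI_T_pvar_session session;
    MPI_T_pvar_session_create(&session);

    for (int i = 0; i < num_pvars; i++) {
        char name[256], desc[256];
        int name_len = sizeof(name), desc_len = sizeof(desc);
        int verbosity, var_class, bind, readonly, continuous, atomic, count;
        MPI_Datatype dtype;
        MPI_T_enum enumtype;

        MPI_T_pvar_get_info(i, name, &name_len, &verbosity, &var_class,
                            &dtype, &enumtype, desc, &desc_len,
                            &bind, &readonly, &continuous, &atomic);

        /* Only sample simple unsigned 64-bit counters not bound to an MPI object. */
        if (dtype != MPI_UNSIGNED_LONG_LONG || bind != MPI_T_BIND_NO_OBJECT)
            continue;

        MPI_T_pvar_handle handle;
        unsigned long long value = 0;
        MPI_T_pvar_handle_alloc(session, i, NULL, &handle, &count);
        if (count == 1) {
            if (!continuous)
                MPI_T_pvar_start(session, handle);
            MPI_T_pvar_read(session, handle, &value);
            printf("pvar %-40s = %llu\n", name, value);
        }
        MPI_T_pvar_handle_free(session, &handle);
    }

    MPI_T_pvar_session_free(&session);
    MPI_T_finalize();
    MPI_Finalize();
    return 0;
}

In practice such counters would be sampled before and after each communication phase and correlated with the job's placement on the network topology, which is the spirit of the hardware-centric analysis the abstract describes.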

Cited by 7 publications (2 citation statements)
References 23 publications (21 reference statements)
“…Brown et al. (2015) proposed a system for visualizing message traffic in a communication network and succeeded in identifying hot spots. In their case study using a samplesort program running on TSUBAME 2.5, a 5% performance gain was obtained by avoiding the hot spots discovered with their tool.…”
Section: Related Work
Mentioning confidence: 99%
“…PERUSE was an international effort to design a callback interface to collect internal information from MPI implementations. PERUSE was implemented in Open MPI [16] and used by selected projects in the MPI community [3,4,17].…”
Section: PERUSE
Mentioning confidence: 99%
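PERUSE itself defines callbacks that fire inside the MPI implementation (for example when a message transfer begins or ends). For readers without a PERUSE-enabled MPI, a rough sense of the kind of per-message information such tools collect can be had with the standard PMPI profiling interface, sketched below. This is a weaker substitute, not the PERUSE API: it intercepts calls only at the MPI library boundary and cannot observe implementation-internal events.

/* Sketch: count bytes sent per destination by interposing MPI_Send through
 * the standard PMPI profiling interface.  PERUSE callbacks, by contrast,
 * expose events from inside the MPI implementation (queue activity,
 * transfer begin/end), which this sketch cannot see. */
#include <mpi.h>
#include <stdio.h>

#define MAX_RANKS 4096
static unsigned long long bytes_to[MAX_RANKS];   /* bytes sent per destination rank */

int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm)
{
    int size;
    MPI_Type_size(datatype, &size);
    if (dest >= 0 && dest < MAX_RANKS)
        bytes_to[dest] += (unsigned long long)count * (unsigned long long)size;
    return PMPI_Send(buf, count, datatype, dest, tag, comm);
}

int MPI_Finalize(void)
{
    int rank;
    PMPI_Comm_rank(MPI_COMM_WORLD, &rank);
    for (int d = 0; d < MAX_RANKS; d++)
        if (bytes_to[d])
            printf("rank %d -> rank %d: %llu bytes\n", rank, d, bytes_to[d]);
    return PMPI_Finalize();
}

Linked ahead of the MPI library (or preloaded), this wrapper records a per-destination traffic matrix without modifying the application, which is the same kind of data a PERUSE-based tool would gather with finer, implementation-internal granularity.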