2017 IEEE 24th International Conference on High Performance Computing (HiPC)
DOI: 10.1109/hipc.2017.00047
Exploiting Common Neighborhoods to Optimize MPI Neighborhood Collectives

Cited by 14 publications (8 citation statements)
References 17 publications
“…Furthermore, we conduct experiments between traces with randomized placement (as has been the case thus far) and a "linear" placement where task i is placed on network endpoint i. This places neighboring tasks in neighboring network endpoints with no fragmentation and preserves all placement optimizations and communication locality that the application contains [30,65]. The purpose is to clearly show how bandwidth steering can reconstruct locality in an execution where locality has been lost due to unfavorable placement and fragmentation on the system.…”
Section: Link Utilization
Mentioning; confidence: 99%
“…Somewhat related to our work are the sparse neighborhood collectives [11][12][13]15] in MPI, with which one can define a restricted set of neighbor processes and perform collectives on them. In our methodology, the neighboring processes in distinct communication stages may or may not communicate with each other depending on what submessages they forward.…”
Section: Related Work
Mentioning; confidence: 99%
“…The standard communicators do not cover such an algorithm. One would need to implement it for a “graph communicator” as “neighbourhood collective communication.” The communicator-creating routine would need to detect the multiple all-to-all communication pattern. This is in general computationally expensive.…”
Section: Implications For Communication Libraries
Mentioning; confidence: 99%
“…The communicator-creating routine would need to detect the multiple all-to-all communication pattern. This is in general computationally expensive. One solution could be the use of the MPI_Info object for the creation of the communicator in order to pass the details about the collectives to the library.…”
Section: Implications For Communication Libraries
Mentioning; confidence: 99%