Proceedings of the 2003 ACM/IEEE Conference on Supercomputing
DOI: 10.1145/1048935.1050208
Performance Comparison of MPI Implementations over InfiniBand, Myrinet and Quadrics

Abstract: In this paper, we present a comprehensive performance comparison of MPI implementations over InfiniBand, Myrinet and Quadrics. Our performance evaluation consists of two major parts. The first part consists of a set of MPI level micro-benchmarks that characterize different aspects of MPI implementations. The second part of the performance evaluation consists of application level benchmarks. We have used the NAS Parallel Benchmarks and the sweep3D benchmark. We not only present the overall performance results, …

Cited by 95 publications (80 citation statements)
References 21 publications
“…The use of RDMA over InfiniBand for MPI communication has been widely accepted as the preferred method for remote data copy operations due to several advantages previously studied [4]. The initiating task performs a copy of the remote buffer into the local buffer.…”
Section: Related Work
confidence: 99%
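The remote-copy pattern described in the citation above can be illustrated with MPI one-sided communication, where the initiating task pulls the contents of a target rank's exposed buffer into a local buffer; MPI implementations over InfiniBand typically service such a get with an RDMA read. The sketch below is not taken from the cited paper; the buffer size, rank roles, and fence-based synchronization are illustrative assumptions.

```c
/* Minimal sketch: initiator-driven remote copy with MPI one-sided
 * communication. Over InfiniBand, MPI_Get is typically mapped onto an
 * RDMA read, so the target CPU is not involved in the data movement.
 * Buffer size and rank roles are illustrative. */
#include <mpi.h>
#include <stdio.h>

#define COUNT 1024

int main(int argc, char **argv)
{
    int rank, nprocs;
    double remote_buf[COUNT];   /* exposed by every rank */
    double local_buf[COUNT];    /* filled by the initiating task */
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Expose remote_buf so other ranks can read it directly. */
    MPI_Win_create(remote_buf, COUNT * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    if (rank == 0 && nprocs > 1) {
        /* Rank 0 (the initiator) copies rank 1's buffer into local_buf. */
        MPI_Get(local_buf, COUNT, MPI_DOUBLE,
                1 /* target rank */, 0 /* target displacement */,
                COUNT, MPI_DOUBLE, win);
    }
    MPI_Win_fence(0, win);   /* completes the transfer */

    if (rank == 0)
        printf("remote copy of %d doubles complete\n", COUNT);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```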
“…In order to evaluate the impact of threads on network latency, we use the multi-threaded latency test that is included in the OSU Micro Benchmark suite [14]. This benchmark performs a ping-pong test with a single sender and multiple receiver threads.…”
Section: B. Impact of Threads on Latency
confidence: 99%
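For context, a ping-pong latency micro-benchmark of the kind referenced above can be sketched as follows. This is a minimal single-threaded version, not the OSU code itself; the multi-threaded OSU test additionally initializes MPI with MPI_THREAD_MULTIPLE and runs several receiver threads on the target rank. Message size, warm-up count, and iteration count are illustrative.

```c
/* Minimal sketch of an MPI ping-pong latency micro-benchmark in the
 * spirit of the OSU latency tests. Run with two ranks; rank 0 sends
 * and rank 1 echoes the message back. */
#include <mpi.h>
#include <stdio.h>

#define ITERATIONS 1000
#define SKIP       100     /* warm-up iterations excluded from timing */
#define MSG_SIZE   8       /* message size in bytes (illustrative) */

int main(int argc, char **argv)
{
    int rank;
    char buf[MSG_SIZE];
    double start = 0.0, end;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int i = 0; i < ITERATIONS + SKIP; i++) {
        if (i == SKIP)
            start = MPI_Wtime();

        if (rank == 0) {            /* sender: ping, then wait for pong */
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {     /* receiver: echo the message back */
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    end = MPI_Wtime();

    if (rank == 0)
        printf("one-way latency: %.2f us\n",
               (end - start) * 1e6 / (2.0 * ITERATIONS));

    MPI_Finalize();
    return 0;
}
```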
“…Related Work Previous work has been done on understanding the communication and non-communication overheads (in the context of MPI) on various architectures [13], [14], [15], [16], [17], [18]. However, none of this work looks at the network saturation behavior that is becoming increasingly important with system size, which is the focus of this paper.…”
Section: Nearest-Neighbor Communication
confidence: 99%