Proceedings of the 13th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming 2008
DOI: 10.1145/1345206.1345228

Design and implementation of a high-performance MPI for C# and the common language infrastructure

Abstract: As high-performance computing enters the mainstream, parallel programming mechanisms (including the Message Passing Interface, or MPI) must be supported in new environments such as C# and the Common Language Infrastructure (CLI). Making effective use of MPI with the CLI requires an interface that reflects the high-level object-oriented nature of C# and that also supports its programming idioms. However, for performance reasons, this high-level functionality must ultimately be mapped to low-level native MPI libraries…
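To make the abstract's point concrete, the sketch below shows a minimal point-to-point exchange in the high-level, object-oriented style the paper advocates. The class and method names (MPI.Environment, Communicator.world, Send, Receive<T>) follow the publicly documented MPI.NET binding; treat the exact signatures as assumptions rather than as code taken from the paper.

// Minimal sketch of a point-to-point exchange in MPI.NET-style C#.
// Names and signatures follow the public MPI.NET binding (assumed, not quoted).
using System;
using MPI;

class PingPong
{
    static void Main(string[] args)
    {
        // The Environment object initializes the native MPI library and
        // finalizes it when the using block is disposed.
        using (new MPI.Environment(ref args))
        {
            Intracommunicator comm = Communicator.world;
            if (comm.Rank == 0)
            {
                // Any serializable object can be sent; primitives and arrays of
                // primitives can be mapped directly onto native MPI datatypes.
                comm.Send("hello from rank 0", 1, 0);    // (value, dest, tag)
            }
            else if (comm.Rank == 1)
            {
                string msg = comm.Receive<string>(0, 0); // (source, tag)
                Console.WriteLine(msg);
            }
        }
    }
}

Each high-level call above is ultimately forwarded to the low-level native MPI library, which is the mapping the abstract says must be handled carefully for performance.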

Cited by 12 publications (5 citation statements). References 7 publications.
“…The Parallel Dwarf project [7] provides an implementation of these programs in two languages: C# and C++. The C# implementation has a sequential version, one based on shared memory via the TPL, and one based on message passing via MPI [8]. Likewise, the C++ implementation has a sequential version and a shared-memory version using OpenMP.…”
Section: How Do Current Language Abstractions Perform?
confidence: 99%
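To show what the sequential and TPL-based C# variants mentioned above look like in practice, here is a small illustrative sketch built around an arbitrary stand-in kernel (vector scaling); it is not code from the Parallel Dwarf sources. The MPI-based variant would instead partition the index range across ranks and exchange data through MPI.NET.

// Illustrative only: a sequential loop and its shared-memory TPL counterpart.
// The kernel (vector scaling) is a stand-in, not taken from the Parallel Dwarf project.
using System;
using System.Threading.Tasks;

class Variants
{
    static void ScaleSequential(double[] x, double a)
    {
        for (int i = 0; i < x.Length; i++)
            x[i] *= a;
    }

    static void ScaleTpl(double[] x, double a)
    {
        // The TPL partitions the iteration space across worker threads.
        Parallel.For(0, x.Length, i => { x[i] *= a; });
    }

    static void Main()
    {
        var x = new double[1 << 20];
        for (int i = 0; i < x.Length; i++) x[i] = i;

        ScaleSequential(x, 2.0);  // sequential variant
        ScaleTpl(x, 0.5);         // shared-memory (TPL) variant
        Console.WriteLine(x[42]); // 42
    }
}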
“…Programming Model Libraries offer a different (often limited) programming model such as the master/slave model (e.g., ADLB [15]) or fine-grained objects (e.g., AP [24]). System and Utility Libraries offer helper functionality to interface different architectural subsystems that are often outside the scope of MPI (e.g., LibTopoMap [10], HDF5 [3]) or language bindings (e.g., Boost.MPI, C# [5]). …”
Section: A Taxonomy For Parallel Libraries
confidence: 99%
“…This enables the reception of dynamically-sized messages. However, this also creates problems in the context of multiple threads [5] since one thread can query the message and another thread can receive it (the queue is a global shared object). A matched probe call that removes the message from the queue while peeking has been proposed to MPI-3 to solve this problem [7].…”
Section: Thread-safe Message Probing
confidence: 99%
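The query-then-receive gap described in this statement can be sketched as follows, again in MPI.NET-style C#; the Probe and Receive<T> calls follow that binding, but their exact signatures, and the availability of a fully thread-enabled MPI library underneath, are assumptions. Rank 0 posts two differently sized messages; on rank 1, the message thread A probes may be consumed by thread B before A's receive runs, which is the race that MPI-3's matched probe (MPI_Mprobe/MPI_Mrecv) removes by dequeuing the message at probe time.

// Sketch of the probe/receive race between two threads sharing a communicator.
// API names follow the MPI.NET binding and are assumptions, not quotations.
using System;
using System.Threading;
using MPI;

class ProbeRace
{
    static void Main(string[] args)
    {
        using (new MPI.Environment(ref args))
        {
            Intracommunicator comm = Communicator.world;
            if (comm.Rank == 0)
            {
                // Two pending messages of different sizes on the same (source, tag).
                comm.Send(new int[10], 1, 0);
                comm.Send(new int[1000], 1, 0);
            }
            else if (comm.Rank == 1)
            {
                Thread a = new Thread(() =>
                {
                    // Query the shared message queue ...
                    Status peeked = comm.Probe(0, 0);
                    // ... but the probed message may already be gone by now:
                    int[] got = comm.Receive<int[]>(0, 0);
                    Console.WriteLine("A received length " + got.Length);
                });
                Thread b = new Thread(() =>
                {
                    // Concurrent receive that can consume the message A probed.
                    int[] got = comm.Receive<int[]>(0, 0);
                    Console.WriteLine("B received length " + got.Length);
                });
                a.Start(); b.Start();
                a.Join(); b.Join();
                // A matched probe (MPI_Mprobe/MPI_Mrecv, at the native MPI-3 level)
                // removes the message from the queue while peeking, so only the
                // probing thread can subsequently receive it.
            }
        }
    }
}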
“…Similarly, an implementation of language bindings such as MPI.NET [6] must assume the general case (scenario 7), so using the fine-grained locking approach likely entails a high cost.…”
Section: A Fine-grained Locking Mechanism
confidence: 99%