2011
DOI: 10.1007/978-3-642-24449-0_45

Writing Parallel Libraries with MPI - Common Practice, Issues, and Extensions

Abstract: Modular programming is an important software design concept. We discuss principles for programming parallel libraries, show several successful library implementations, and introduce a taxonomy for existing parallel libraries. We derive common requirements that parallel libraries pose on the programming framework. We then show how those requirements are supported in the Message Passing Interface (MPI) standard. We also note several potential pitfalls for library implementers using MPI. Finally, we con…

Cited by 7 publications (3 citation statements) · References: 19 publications (25 reference statements)
“…On the contrary, we show that the full potential of partitioning and advanced topology mapping can be provided "under the hood". Our library follows the guidelines for good MPI library design [36] and completely hides all communication and data-distribution functions from the user. Thus, it enables the highest performance portability across a wide variety of architectures and arbitrary network topologies.…”
Section: Discussion
confidence: 99%
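The citing work describes keeping all communication, data distribution, and topology mapping inside the library. The C sketch below illustrates that pattern under stated assumptions; the names halo_ctx, halo_init, halo_exchange, and halo_free are hypothetical and not taken from the cited library. The idea is that the library builds a private graph communicator (letting MPI reorder ranks for the topology) and never exposes it to the caller.

```c
#include <mpi.h>

typedef struct {
    MPI_Comm topo_comm;   /* private graph communicator; never exposed to the user */
    int      nneighbors;
} halo_ctx;

/* The library maps the caller's neighborhood onto the network topology
 * internally (reorder = 1) and keeps the resulting communicator hidden. */
int halo_init(MPI_Comm user_comm, int nneighbors, const int neighbors[],
              halo_ctx *ctx)
{
    ctx->nneighbors = nneighbors;
    return MPI_Dist_graph_create_adjacent(user_comm,
                                          nneighbors, neighbors, MPI_UNWEIGHTED,
                                          nneighbors, neighbors, MPI_UNWEIGHTED,
                                          MPI_INFO_NULL, 1 /* reorder */,
                                          &ctx->topo_comm);
}

/* All communication happens on the hidden communicator; sendbuf and recvbuf
 * hold `count` elements per neighbor. */
int halo_exchange(halo_ctx *ctx, const double *sendbuf, double *recvbuf, int count)
{
    return MPI_Neighbor_alltoall(sendbuf, count, MPI_DOUBLE,
                                 recvbuf, count, MPI_DOUBLE, ctx->topo_comm);
}

int halo_free(halo_ctx *ctx) { return MPI_Comm_free(&ctx->topo_comm); }
```

Because the caller only sees an opaque handle, the library is free to change its internal communication schedule or topology mapping without affecting user code, which is the performance-portability argument made in the quoted statement.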
“…MPI works on the principle that nothing is shared between processes unless it is explicitly transported by the programmer. These semantics simplify reasoning about the program's state (Hoefler & Snir, 2011) and avoid complex problems that are often encountered in shared-memory programming models (Lee, 2006) where automatic memory synchronization becomes a significant bottleneck.…”
Section: Related Work
confidence: 99%
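As a concrete illustration of these shared-nothing semantics (my own minimal example, not taken from the cited papers), the C program below shows that a value written in one process is invisible to every other process until it is explicitly transported by a matching send/receive pair.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;   /* visible only inside process 0 ... */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* ... until it is explicitly transported to process 1 */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```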
“…Its shared-nothing semantics and the SPMD programming model simplify reasoning about the program's state and avoid complex problems that are often encountered in shared-memory programming models [10]. Composition is achieved through communication contexts (called communicators in MPI) that enable multiple parallel libraries or objects to be combined into a single program without interference [8]. Those features have made MPI the predominant programming model for parallel scientific applications.…”
Section: Introduction
confidence: 99%
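A minimal sketch of such context-based composition, assuming a hypothetical library handle and API names (mylib_handle, mylib_init, mylib_reduce_sum): the library duplicates the caller's communicator at initialization, so its internal collectives run in a separate communication context and cannot match messages posted by the application or by other libraries.

```c
#include <mpi.h>

typedef struct { MPI_Comm priv; } mylib_handle;

int mylib_init(MPI_Comm user_comm, mylib_handle *h)
{
    /* MPI_Comm_dup is collective over user_comm and yields a new
     * communication context for the library's private traffic. */
    return MPI_Comm_dup(user_comm, &h->priv);
}

int mylib_reduce_sum(mylib_handle *h, const double *in, double *out, int n)
{
    /* Internal collective on the private communicator: no interference
     * with application-level communication on user_comm. */
    return MPI_Allreduce(in, out, n, MPI_DOUBLE, MPI_SUM, h->priv);
}

int mylib_finalize(mylib_handle *h) { return MPI_Comm_free(&h->priv); }
```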