2010
DOI: 10.1155/2010/715637
A Programming Model Performance Study Using the NAS Parallel Benchmarks

Abstract: Harnessing the power of multicore platforms is challenging due to the additional levels of parallelism present. In this paper we use the NAS Parallel Benchmarks to study three programming models, MPI, OpenMP, and PGAS, to understand their performance and memory usage characteristics on current multicore architectures. To understand these characteristics we use the Integrated Performance Monitoring tool and other ways to measure communication versus computation time, as well as the fraction of the run time spent …
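The abstract mentions breaking run time into communication versus computation, the kind of breakdown the Integrated Performance Monitoring tool reports. As a rough illustration only (this is a hypothetical sketch, not IPM itself, and the solver/exchange stand-ins are assumptions), the bookkeeping could look like:

```python
import time

class PhaseTimer:
    """Accumulate wall-clock time into named phases ("comm" vs "comp")."""

    def __init__(self):
        self.totals = {"comm": 0.0, "comp": 0.0}

    def measure(self, phase, fn, *args):
        """Run fn, charging its elapsed time to the given phase."""
        start = time.perf_counter()
        result = fn(*args)
        self.totals[phase] += time.perf_counter() - start
        return result

    def comm_fraction(self):
        """Fraction of measured time spent in communication."""
        total = sum(self.totals.values())
        return self.totals["comm"] / total if total else 0.0

timer = PhaseTimer()
# Stand-ins for a compute kernel and a message exchange (assumptions).
timer.measure("comp", lambda: sum(i * i for i in range(100_000)))
timer.measure("comm", lambda: time.sleep(0.01))
print(f"communication fraction: {timer.comm_fraction():.2f}")
```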

Cited by 18 publications (13 citation statements). References 2 publications.
“…Our conclusions about UPC are compatible with results obtained in previous studies, in particular [8], [9]. We confirm that UPC can compete with both MPI and OpenMP in performance on a single-node.…”
Section: Discussion (supporting)
confidence: 91%
“…Even though there is no global winner in the obtained single-node measurements, UPC is able to compete with both. As described in [9], on a single-node platform, UPC scales well over more CPU cores and competes well with OpenMP and MPI. However, the performance of MPI or OpenMP is better in many cases.…”
Section: B. Measurements on Single-Node Architecture (mentioning)
confidence: 93%
“…A comparison of performance and programmability between UPC and MPI was given in [23] for a realistic fluid dynamic implementation. For a general comparison between OpenMP, UPC and MPI programming, we refer to [25].…”
Section: Results (mentioning)
confidence: 99%
“…To name a few, Nakajima [5] described how to use a three-level hybrid programming model (vectorization, OpenMP, and MPI) to program efficiently on Earth Simulator. Shan et al [7] discussed the advantage of using the hybrid MPI+OpenMP programming model for NAS parallel applications. Kaushik et al [4] investigated the performance of implicit PDE simulations for the hybrid MPI+OpenMP programming model on a multicore architecture.…”
Section: Related Work (mentioning)
confidence: 99%
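The hybrid MPI+OpenMP model discussed in these citing works maps distributed-memory ranks across nodes or sockets and shared-memory threads across the cores within each. A minimal sketch of that core-accounting arithmetic (the node geometry here is an assumption for illustration, not a configuration from any cited paper):

```python
def hybrid_layout(nodes, sockets_per_node, cores_per_socket):
    """Return (mpi_ranks, threads_per_rank, total_cores) for a hybrid
    MPI+OpenMP run that places one MPI rank per socket and one OpenMP
    thread per core within that socket."""
    mpi_ranks = nodes * sockets_per_node
    threads_per_rank = cores_per_socket
    return mpi_ranks, threads_per_rank, mpi_ranks * threads_per_rank

# e.g. 4 nodes, 2 sockets each, 4 cores per socket
ranks, threads, cores = hybrid_layout(4, 2, 4)
print(ranks, threads, cores)  # 8 MPI ranks x 4 threads = 32 cores
```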