2009
DOI: 10.1007/978-3-642-03770-2_24

Performance Evaluation of MPI, UPC and OpenMP on Multicore Architectures

Abstract: The current trend toward multicore architectures underscores the need for parallelism. While new languages and alternatives for supporting these systems more efficiently are being proposed, MPI faces this new challenge. Up-to-date performance evaluations of the current options for programming multicore systems are therefore needed. This paper evaluates MPI performance against Unified Parallel C (UPC) and OpenMP on multicore architectures. From the analysis of the results, it can be concluded that MPI is generally the best c…

Cited by 64 publications (38 citation statements); references 9 publications.
“…If shared memory is available on a particular hardware platform, notably multi-core processors, an MPI implementation may use it to provide effective interprocess communication for processes located on the cores of an individual CPU. While this is effective enough to compete head-on with OpenMP and UPC [7], the sole purpose of this optimization is to make efficient use of contemporary CPU designs. The primary unit of processing is and remains that of a process.…”
Section: Related Work
confidence: 99%
“…Early hybrid parallelism work focused on benchmarking well-known computational kernels [5], [6]. Subsequent work explored specific impacts of hybrid parallelism for visualization [7].…”
Section: Hybrid Parallelism
confidence: 99%
“…The hybrid model allows data movement among nodes using traditional MPI motifs like scatter and gather, but within nodes using shared-memory parallelism via threaded frameworks like POSIX threads or OpenMP. Previous work comparing distributed-memory versus hybrid-memory implementations (e.g., [21], [22]) has focused on benchmarking well-known computational kernels. In contrast, our study examines this space from the perspective of visualization algorithms.…”
Section: Hybrid Parallelism
confidence: 99%