2005
DOI: 10.1155/2005/914081
Execution Model of Three Parallel Languages: OpenMP, UPC and CAF

Abstract: The aim of this paper is to present a qualitative evaluation of three state-of-the-art parallel languages: OpenMP, Unified Parallel C (UPC) and Co-Array Fortran (CAF). OpenMP and UPC are explicit parallel programming languages based on the ANSI standard, while CAF is an implicit programming language. On the one hand, OpenMP is designed for shared-memory architectures and extends the base language with compiler directives that annotate the original source code. On the other hand, UPC and CAF are designed for distributed-memory architectures…
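As an illustration of the directive-based style the abstract attributes to OpenMP, here is a minimal sketch (not taken from the paper; the array x, the size N, and the reduction over sum are invented for the example). A single pragma annotation is the only change to the serial C loop; it assumes an OpenMP-capable compiler, e.g. gcc -fopenmp.

    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    int main(void) {
        static double x[N];   /* static keeps the large array off the stack */
        double sum = 0.0;
        int i;

        /* The directive below is the only change to the serial loop: OpenMP
           splits the iterations across the threads of a shared-memory machine,
           and the reduction clause combines the per-thread partial sums. */
        #pragma omp parallel for reduction(+:sum)
        for (i = 0; i < N; i++) {
            x[i] = (double)i;
            sum += x[i];
        }

        printf("sum = %.0f using up to %d threads\n", sum, omp_get_max_threads());
        return 0;
    }

Removing the pragma leaves a valid serial program, which is the incremental, annotation-driven parallelization style the abstract contrasts with the distributed-memory designs of UPC and CAF.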

Cited by 6 publications (6 citation statements)
References 13 publications
“…The naive UPC implementation in Section 3.2 shows the easy programmability of UPC that is fully comparable with OpenMP, as discussed in e.g. [20]. The first code transformation, in form of explicit thread privatization shown in Section 4.1, may be done by automated code translation.…”
Section: Results · mentioning · confidence: 99%
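To make the privatization transformation mentioned in the excerpt above concrete, below is a minimal UPC sketch (illustrative only, not the code of the cited paper; the array a, the per-thread count NPT, and the loop bodies are invented for the example). The first loop is the naive version in which every access goes through a pointer-to-shared; the second casts the elements owned by the calling thread to an ordinary C pointer, which is the kind of explicit thread privatization the citing authors suggest could be produced by automated code translation. It assumes a UPC compiler such as Berkeley upcc.

    #include <upc.h>
    #include <stdio.h>

    #define NPT 256                    /* elements per thread (illustrative) */

    /* Default (cyclic) layout: element i has affinity to thread i % THREADS. */
    shared double a[NPT * THREADS];

    int main(void) {
        int i;

        /* Naive version: each thread executes the iterations whose element has
           affinity to it, but every access still uses a pointer-to-shared. */
        upc_forall (i = 0; i < NPT * THREADS; i++; &a[i])
            a[i] = 2.0 * i;
        upc_barrier;

        /* Privatized version: a[MYTHREAD] has affinity to this thread, so the
           cast to a plain C pointer is legal; on typical implementations
           local[j] then addresses a[MYTHREAD + j*THREADS] at local-memory cost. */
        double *local = (double *) &a[MYTHREAD];
        for (i = 0; i < NPT; i++)
            local[i] += 1.0;

        upc_barrier;
        if (MYTHREAD == 0)
            printf("a[0] = %.1f\n", a[0]);   /* expect 1.0 */
        return 0;
    }

The transformation pays off because accesses through the privatized pointer avoid run-time shared-pointer arithmetic for data the thread already owns, which is the overhead such a rewrite is meant to remove.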
“…This can be precisely calculated when the values of m and n, as well as the thread grid layout, are known. Formula (20) is essentially the same as (13) from Section 5.2.5. It is commented that $S^{\mathrm{local}}_{\mathrm{thread}}$ in (20) denotes the total volume of all local messages (in both horizontal and vertical directions) per thread, likewise $S^{\mathrm{remote}}_{\mathrm{thread}}$ denotes the total volume of all remote messages, with $C^{\mathrm{remote}}_{\mathrm{thread}}$ denoting the number of remote messages per thread.…”
mentioning · confidence: 99%
“…The design of a parallel programming model that meets all of these expectations is still out of reach [2, 10-12, 21, 22, 24, 28]. Still, despite the many obstacles, there has been some progress in parallel programming that brings us closer to the desired parallel programming model [12].…”
Section: Introduction · mentioning · confidence: 99%
“…However, despite the many obstacles, there has been enough progress in parallel programming to bring us closer to the desired parallel language [23]. Two main parallel programming paradigms are widely used for developing scientific and engineering applications: shared-memory (SM) and distributed-shared-memory (DSM) parallel programming models.…”
Section: Introduction · mentioning · confidence: 99%