Proceedings of the Tenth ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming 2005
DOI: 10.1145/1065944.1065950
An evaluation of global address space languages

Cited by 76 publications (15 citation statements) · References 13 publications
“…For example, one can explicitly mark loops that have to be parallelized with annotations such as OpenMP [2] or OpenACC [3], or use an API such as OpenCL or CUDA. There are also approaches that rely on more fundamental language modifications, such as Charm++ [4] or Partitioned Global Address Space (PGAS) languages such as UPC or Co-Array Fortran [5]. These embed parallel constructs that aim to abstract the actual machine.…”
Section: Target Specific Compilation (mentioning, confidence: 99%)
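As a concrete illustration of the annotation style this excerpt describes (not code from the cited paper), here is a minimal OpenMP sketch in C; the array names and sizes are hypothetical:

#include <stdio.h>

#define N 1000000

static double a[N], b[N];

int main(void) {
    /* The pragma marks the loop for parallel execution; a compiler
       without OpenMP support (or built without -fopenmp) simply
       ignores it and runs the loop sequentially. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        a[i] = 2.0 * b[i];

    printf("a[0] = %f\n", a[0]);
    return 0;
}

This shows the trade-off the citing paper points to: the annotation leaves the sequential program intact, whereas PGAS approaches such as UPC change the language itself.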
“…The idea of using qualifiers to distinguish between shared and private memory originated in SIMD array languages [13], and is used in PGAS languages such as Titanium [14], Co-array Fortran and Unified Parallel C [15]. Similar storage qualifiers are used by CUDA and OpenCL to specify data locations in accelerators with hierarchical memory.…”
Section: Related Work (mentioning, confidence: 99%)
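To make the shared/private qualifier idea concrete, a minimal sketch in UPC (a PGAS extension of C); it assumes a static-threads compilation environment (e.g. Berkeley UPC with upcc -T 4), and the names are hypothetical:

#include <upc.h>
#include <stdio.h>

#define N 64

/* 'shared' places x in the global address space; with the default
   block size, element i has affinity to thread i % THREADS. */
shared double x[N];

int main(void) {
    double t = 0.0;  /* unqualified: private, one copy per thread */

    /* upc_forall runs iteration i on the thread with affinity to
       &x[i], so each thread writes only its local elements. */
    upc_forall (int i = 0; i < N; i++; &x[i])
        x[i] = (double)i;

    upc_barrier;
    if (MYTHREAD == 0) {
        for (int i = 0; i < N; i++)
            t += x[i];           /* reads may touch remote elements */
        printf("sum = %f\n", t);
    }
    return 0;
}

CUDA's __shared__ and __device__ qualifiers play the analogous role for an accelerator's hierarchical memory, as the excerpt notes.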
“…Distributed shared memory (DSM) systems model global shared memory using distributed local memory. In order to facilitate better mapping to distributed memory, some DSM systems like UPC [Coarfa05] differentiate between local data and shared global data. Although both types of memory may be accessed through the uniform interface of pointer dereferencing, global data typically has a longer access time.…”
Section: Related Work (mentioning, confidence: 99%)
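A short UPC sketch (hypothetical, not taken from [Coarfa05]) of the point this excerpt makes: dereferencing looks uniform, but an access can resolve to remote memory and cost more:

#include <upc.h>
#include <stdio.h>

int main(void) {
    /* Collectively allocate THREADS shared ints, one with affinity
       to each thread. */
    shared int *p = upc_all_alloc(THREADS, sizeof(int));

    p[MYTHREAD] = MYTHREAD;      /* write to the locally held element */
    upc_barrier;

    if (MYTHREAD == 0) {
        /* The dereference syntax is the same for every element, but
           p[i] with i != 0 is a remote read that the runtime turns
           into communication, hence the longer access time. */
        for (int i = 0; i < THREADS; i++)
            printf("p[%d] = %d (affinity: thread %d)\n",
                   i, p[i], (int)upc_threadof(&p[i]));
    }
    upc_barrier;
    if (MYTHREAD == 0)
        upc_free(p);
    return 0;
}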