2017 17th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID)
DOI: 10.1109/ccgrid.2017.81
Implementation and Evaluation of One-Sided PGAS Communication in XcalableACC for Accelerated Clusters

Cited by 3 publications (4 citation statements)
References 12 publications
“…Instead, an advantage of the language extension is that new features can be added without being restricted by the functions of the original language. For example, the XACC C language also supports coarray features inherited from Fortran 2008 to issue one-sided communication with ease (Tabuchi et al, 2017).…”
Section: Related Research
confidence: 99%
“…This method achieved better performance than OpenACC + MPI/InfiniBand with the Himeno benchmark, and we demonstrated that the XACC high-level communication description has the capacity to be hardware independent and facilitate high-performance programming. Moreover, we proposed the XACC local-view model [16], which employs a coarray feature for one-sided communication on accelerator memories. We implemented both global-view and local-view models using MPI, evaluated their performance and productivity with the Himeno benchmark and the NAS Parallel Benchmarks CG benchmark, and discussed their proper use.…”
Section: Related Work
confidence: 99%
“…We used the PGI compiler as the OpenACC implementation and MVAPICH2 as the MPI implementation. We also used the Omni XACC compiler [10,16], which is a source-to-source XACC compiler based on the Omni compiler infrastructure [14], where Figure 9 shows the compilation flow. An input XACC code was translated into an OpenACC code with XACC runtime calls by using the XACC translator.…”
Section: Performance
confidence: 99%