2011 IEEE International Symposium on Parallel and Distributed Processing Workshops and PhD Forum
DOI: 10.1109/ipdps.2011.385
Scout: High-Performance Heterogeneous Computing Made Simple

Abstract: Researchers must often write their own simulation and analysis software. During this process they simultaneously confront both computational and scientific problems. Current strategies for aiding the generation of performance-oriented programs do not abstract the software development from the science. Furthermore, the problem is becoming increasingly complex and pressing with the continued development of many-core and heterogeneous (CPU-GPU) architectures. To achieve high performance, scientists must expertly …

Cited by 4 publications (2 citation statements)
References 7 publications (4 reference statements)
“…Our code can also be compiled to Thrust's OpenMP or Intel TBB backends to run on multi-core CPUs (including the Xeon Phi). While Thrust essentially functions as a source-to-source translator, it may be possible to provide even more efficient support for data-parallelism using, for example, compiler optimizations (Jablin et al 2011). We use Thrust because it is a readily-available, easy-to-use, production-quality, open-source, and effective library, but our data-parallel algorithms should also be compatible with alternative data-parallel frameworks.…”
Section: Halo Analysis on the GPU
confidence: 99%
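The statement above describes compiling the same data-parallel code to different Thrust backends (CUDA, OpenMP, or Intel TBB) selected at build time. A minimal sketch of that pattern, shown here with `std::transform` so it is self-contained — the function name, data, and scaling operation are illustrative assumptions, not taken from the cited code:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical per-element operation (illustration only): scale a vector
// of particle masses.  With Thrust, the equivalent thrust::transform call
// has the same shape, and the execution backend is chosen at compile time,
// e.g.  -DTHRUST_DEVICE_SYSTEM=THRUST_DEVICE_SYSTEM_OMP  for OpenMP.
std::vector<float> scale_masses(const std::vector<float>& mass, float factor) {
    std::vector<float> scaled(mass.size());
    // Serial std::transform stands in for the backend-dispatched version.
    std::transform(mass.begin(), mass.end(), scaled.begin(),
                   [factor](float m) { return factor * m; });
    return scaled;
}
```

The point of the pattern is that the algorithm is written once against the transform interface; only the backend selection changes between GPU and multi-core CPU builds.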
“…We modified Clang's front-end including extending the lexer to add new keywords, added new AST types to represent meshes and parallel for, extended the parser and semantic analyzer to support our new statements, declarations, and expressions, and implemented their associated code generation to IR and interface to our runtime. We created an extension of DWARF to recognize Scout constructs and modified Clang and LLDB to support debugging of Scout statements and expressions [6,12]. Scout, like the current OpenMP functionality in Clang, differs from the previous approaches in that there is no new intermediate representation: the functionality is wired into the Clang front-end and generates LLVM IR and runtime library calls directly.…”
Section: Scout
confidence: 99%
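The statement above notes that Scout wires its mesh and parallel-for constructs into the Clang front-end, which generates LLVM IR and runtime calls directly, with no new intermediate representation. A rough sketch of what such lowering amounts to — the function name and the per-cell operation are hypothetical, not Scout's actual generated code:

```c
#include <stddef.h>

/* Illustrative lowering (assumption, not Scout's real codegen): a
 * mesh-parallel "forall cells" statement, once parsed by the extended
 * front-end, can be emitted as an ordinary loop over per-cell fields
 * plus calls into a runtime library.  Here the loop body doubles a
 * per-cell field value. */
void forall_cells_double(float *field, size_t ncells) {
    /* A serial loop stands in for the generated IR; in practice the
     * runtime would distribute these iterations across devices. */
    for (size_t i = 0; i < ncells; ++i) {
        field[i] *= 2.0f;
    }
}
```

Keeping the lowering inside the front-end, as the citation notes for both Scout and Clang's OpenMP support, avoids maintaining a separate intermediate representation between the AST and LLVM IR.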