2014 Fourth International Workshop on Domain-Specific Languages and High-Level Frameworks for High Performance Computing 2014
DOI: 10.1109/wolfhpc.2014.8
PTG: An Abstraction for Unhindered Parallelism

Abstract: Increased parallelism and use of heterogeneous computing resources is now an established trend in High Performance Computing (HPC), a trend that, looking forward to Exascale, seems bound to intensify. Despite the evolution of hardware over the past decade, the programming paradigm of choice was invariably derived from Coarse Grain Parallelism with explicit data movements. We argue that message passing has remained the de facto standard in HPC because, until now, the ever increasing challenges that application …

Cited by 39 publications (27 citation statements).
References 33 publications (29 reference statements).
“…The PTG programming paradigm—which enables explicit dataflow representation of programs—proposes a completely different path from the way parallel applications have been designed and developed up to the present. The PTG decouples the expression of parallelism in the algorithm from the control flow ordering, data distribution, and load balance. Despite the lower startup overhead of implicit dataflow paradigms in terms of development effort (i.e., simply submitting tasks in the sequential flow of the original code), the significance of the increased implementation effort from the PTG becomes apparent when comparing the superior performance of the explicit dataflow version of the CC to the implicit dataflow version and to the traditional CC computation.…”
Section: Discussion
confidence: 99%
“…The PTG decouples the expression of parallelism in the algorithm from the control flow ordering, data distribution, and load balance. Despite the lower startup overhead of implicit dataflow paradigms in terms of development effort (i.e., simply submitting tasks in the sequential flow of the original code), the significance of the increased implementation effort from the PTG becomes apparent when comparing the superior performance of the explicit dataflow version of the CC to the implicit dataflow version and to the traditional CC computation. The PTG version of the CC outperforms the original CC version by a significant margin, to be precise, by a factor of 2.6 on 32 nodes.…”
Section: Discussion
confidence: 99%
“…It allows building graphs using the PTG model, but it also supports various other features such as memory allocation management and parallel containers. PaRSEC (Danalis et al., 2014) is another RS based on the PTG model that has been demonstrated to be effective in various scientific applications. CHARM++ (Kale & Krishnan, 1993) is a C++-based parallel programming system.…”
Section: Task-based Parallelization
confidence: 99%
“…The StarPU [5] and OmpSs [10] frameworks extend the C compiler and allow the programmer to use compiler directives to define C functions as task kernels and describe their data dependencies. The PaRSEC [9,11] framework provides tools and utilities to analyze a program written in a special language that describes tasks and data dependencies, and uses a source-to-source compiler to translate the optimal solution into C code for compilation. The DuctTeip [22], Chunks and Tasks [17] and also StarPU [2] frameworks provide an Application Programming Interface (API) for defining data and tasks to run in a distributed memory environment.…”
Section: Introduction
confidence: 99%