Proceedings of the 50th Hawaii International Conference on System Sciences (HICSS), 2017
DOI: 10.24251/hicss.2017.750

A Comparison of Task Parallel Frameworks based on Implicit Dependencies in Multi-core Environments

Abstract: The greater flexibility that task parallelism offers with respect to data parallelism comes at the cost of higher complexity, due to the variety of tasks and the arbitrary patterns of dependences they can exhibit. These dependencies should be expressed not only correctly, but optimally, i.e., avoiding over-constraints, in order to obtain the maximum performance from the underlying hardware. There have been many proposals to facilitate this non-trivial task, particularly within the scope of nowaday…
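The over-constraint issue the abstract refers to can be seen in a small, self-contained example. The paper itself compares frameworks that infer dependencies implicitly; the OpenMP depend clauses below are explicit and serve only as a compact, hedged illustration of why declaring a broader access mode than necessary (inout instead of in) needlessly serializes tasks.

#include <cstdio>

int main() {
    double a = 1.0, b = 0.0, c = 0.0;
    #pragma omp parallel
    #pragma omp single
    {
        // Producer task: writes a.
        #pragma omp task depend(out: a)
        { a = 2.0; }

        // Minimal, correct declaration: both consumers only read a,
        // so they may run concurrently once the producer finishes.
        #pragma omp task depend(in: a) depend(out: b)
        { b = a + 1.0; }
        #pragma omp task depend(in: a) depend(out: c)
        { c = a * 3.0; }

        // Over-constrained alternative: declaring depend(inout: a) on the
        // two consumer tasks would force them to run one after the other,
        // even though neither of them modifies a.
        #pragma omp taskwait
    }
    std::printf("b=%f c=%f\n", b, c);
    return 0;
}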

Cited by 5 publications (5 citation statements)
References 30 publications (38 reference statements)
“…The result is also a testament to the high degree of optimization of the standard mode. In fact, as one would expect, the kernel of UPC++ DepSpawn in standard mode is largely based on the runtime of DepSpawn, which was shown to be on par with state-of-the-art tools such as OpenMP in [10].…”
Section: Runtime Overheads
confidence: 98%
“…Its main responsibility is the scheduling of these tasks once DepSpawn informs TBB that they are ready for execution. Finally, it deserves to be mentioned that DepSpawn, which only supports shared-memory environments, has been compared in terms of performance and programmability to some of the most relevant alternatives in this field, achieving satisfactory results [10].…”
Section: Task Parallelism with DepSpawn
confidence: 99%
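For context, DepSpawn exposes this implicit-dependency model through a spawn() call that infers each argument's access mode from the formal parameter type of the spawned function: non-const references are treated as writes, while const references and values are treated as reads. The minimal sketch below assumes the depspawn::spawn and depspawn::wait_for_all entry points and the depspawn/depspawn.h header; exact names may differ across versions, so treat it as illustrative rather than as the library's definitive API.

#include "depspawn/depspawn.h"   // assumed header name

using namespace depspawn;

// Non-const reference: spawn() treats x as written by this task.
void produce(int& x) { x = 42; }

// Const reference is a read, non-const reference a write, so this task
// depends on any earlier task that writes src.
void consume(int& dst, const int& src) { dst = src + 1; }

int main() {
    int a = 0, b = 0;

    spawn(produce, a);      // writes a
    spawn(consume, b, a);   // reads a, writes b -> waits for produce(a)

    wait_for_all();         // all spawned tasks have finished; b == 43
    return 0;
}

As the citation above notes, the division of responsibilities is that DepSpawn tracks these inferred dependences and TBB schedules each task once its inputs are ready.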
“…This way, the new library supports dataflow execution in a distributed-memory system managed by UPC++. The library also integrates the shared-memory runtime of the original DepSpawn, which was compared to state-of-the-art approaches in [14] with good results, thus additionally enabling task parallelism within each UPC++ process.…”
Section: UPC++ DepSpawn
confidence: 99%
“…This paradigm has repeatedly been shown to largely improve the performance of critical mathematical algorithms, [9][10][11] and is therefore of great practical interest. While several tools support this strategy, particularly in shared-memory environments, 12 there was no PGAS alternative that supported it until the development of UPC++ DepSpawn. 13 This library enables dataflow computing on top of UPC++, 8 another library that implements and extends in C++ the Unified Parallel C (UPC) 3 language.…”
Section: Introduction
confidence: 99%