2008 IEEE International Conference on Cluster Computing
DOI: 10.1109/clustr.2008.4663765
A dependency-aware task-based programming environment for multi-core architectures

Abstract: Parallel programming on SMP and multi-core architectures is hard. In this paper we present a programming model for those environments based on automatic function level parallelism that strives to be easy, flexible, portable, and performant. Its main trait is its ability to exploit task level parallelism by analyzing task dependencies at run time. We present the programming environment in the context of algorithms from several domains and pinpoint its benefits compared to other approaches. We discuss i…
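
To make the annotation style concrete, the sketch below shows how a blocked kernel might be declared as a task in the SMPSs/StarSs pragma style. The clause spellings (input/inout) follow published SMPSs examples but should be treated as assumptions, since they vary across StarSs flavors; the runtime builds the dependency graph from these annotations as tasks are instantiated.

    /* A minimal sketch, assuming SMPSs/StarSs-style task pragmas; a plain
     * compiler ignores the pragmas and runs the code serially. */
    #define BS 64

    #pragma css task input(a, b) inout(c)
    void block_multiply(const float a[BS][BS], const float b[BS][BS],
                        float c[BS][BS])
    {
        /* Each call becomes a task; tasks touching the same c block are
         * serialized by the runtime, while independent blocks may run
         * in parallel. */
        for (int i = 0; i < BS; i++)
            for (int j = 0; j < BS; j++)
                for (int k = 0; k < BS; k++)
                    c[i][j] += a[i][k] * b[k][j];
    }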

Cited by 188 publications (166 citation statements)
References 22 publications (27 reference statements)
“…In their approach, the compiler generates code that scans and enumerates all vertices of the DAG at the beginning of the run-time execution. This has the same drawbacks as approaches, such as StarSS [21] and TBlas [24], that rely on pseudo-execution of the serial loops at run-time to dynamically discover dependencies between kernels. The overhead grows with the problem size and the scheduling is either centralized or replicated.…”
Section: Related Work
confidence: 98%
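
The overhead this excerpt refers to comes from the main thread walking the serial code and registering every task's accesses. Below is a hypothetical C++ sketch of such a discovery pass, not the StarSs implementation; the names and data layout are assumptions. A table maps each address to its last writer and current readers, and each new task gains RAW, WAR, and WAW edges from it.

    // Hypothetical sketch of run-time dependency discovery; structure and
    // names are assumptions, not the StarSs runtime.
    #include <unordered_map>
    #include <utility>
    #include <vector>

    enum class Access { Read, Write };

    struct Task {
        int id;
        std::vector<int> predecessors;  // tasks that must complete first
    };

    class DependencyTracker {
        std::unordered_map<const void*, int> last_writer_;           // addr -> writer task
        std::unordered_map<const void*, std::vector<int>> readers_;  // addr -> reader tasks
        std::vector<Task> tasks_;
    public:
        // Called once per task instance while the master thread pseudo-executes
        // the serial loops; total cost therefore grows with problem size.
        int add_task(const std::vector<std::pair<const void*, Access>>& args) {
            Task t{static_cast<int>(tasks_.size()), {}};
            for (const auto& [addr, mode] : args) {
                auto w = last_writer_.find(addr);
                if (w != last_writer_.end())
                    t.predecessors.push_back(w->second);  // RAW or WAW edge
                if (mode == Access::Write) {
                    for (int r : readers_[addr])          // WAR edges
                        t.predecessors.push_back(r);
                    readers_[addr].clear();
                    last_writer_[addr] = t.id;
                } else {
                    readers_[addr].push_back(t.id);
                }
            }
            tasks_.push_back(t);
            return t.id;
        }
    };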
“…The idea is an extension to our previous work on speculative updates to shared memory locations using Software Transactional Memory (STM) [7]. We implement our idea in StarSs [10], a task based programming model with support for heterogeneity. StarSs has implementations for widely used multi-core architectures such as Symmetric Multiprocessors (SMP), the Cell Broadband Engine (Cell B./E.…”
Section: Listing 1: Example Pseudo Code
confidence: 99%
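
As a rough illustration of the speculative-update idea mentioned in this excerpt, the sketch below shows the optimistic read-compute-commit pattern on a single shared location using an atomic compare-and-swap. It conveys the general STM-style retry discipline, not the authors' actual STM machinery.

    // A minimal sketch of optimistic (STM-style) speculation on one shared
    // location; this is the generic pattern, not the cited STM system.
    #include <atomic>

    std::atomic<double> shared_sum{0.0};

    void speculative_add(double delta) {
        double expected = shared_sum.load(std::memory_order_relaxed);
        // Commit succeeds only if no other task updated shared_sum since we
        // read it; on failure 'expected' is refreshed and the sum is
        // recomputed on the next loop iteration.
        while (!shared_sum.compare_exchange_weak(expected, expected + delta,
                                                 std::memory_order_acq_rel,
                                                 std::memory_order_relaxed)) {
            // retry with the refreshed snapshot
        }
    }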
“…We force the scheduler to run each task version at least λ times during the initial learning phase. Once all task versions belonging to the same group of data set sizes have been run at least λ times, we consider that the scheduler has enough reliable information, and it switches to the reliable-information phase for the given group of data set sizes. This means that the scheduler can have different criteria for the ready tasks that it picks from the task graph, depending on whether their corresponding group of data set sizes has enough reliable information or not.…”
Section: B. Runtime Implementation
confidence: 99%
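
This excerpt describes a two-phase versioning scheduler. Below is a hypothetical C++ sketch of that bookkeeping, with LAMBDA standing in for the paper's λ; the class and field names are assumptions. During the learning phase the least-sampled version is chosen so every version reaches λ timed runs; once a data-set-size group is fully sampled, the scheduler picks the version with the best mean time.

    // Hypothetical sketch of a two-phase versioning scheduler.
    #include <cstddef>
    #include <limits>
    #include <vector>

    constexpr long LAMBDA = 5;  // assumed threshold; the paper calls it lambda

    struct VersionStats {
        long runs = 0;
        double total_ms = 0.0;
        double mean() const { return runs ? total_ms / runs : 0.0; }
    };

    class VersionScheduler {
        std::vector<std::vector<VersionStats>> stats_;  // stats_[group][version]
    public:
        VersionScheduler(std::size_t groups, std::size_t versions)
            : stats_(groups, std::vector<VersionStats>(versions)) {}

        bool group_is_reliable(std::size_t g) const {
            for (const auto& v : stats_[g])
                if (v.runs < LAMBDA) return false;
            return true;
        }

        // Learning phase: return the least-run version; reliable phase:
        // return the version with the fastest mean execution time.
        std::size_t pick(std::size_t g) const {
            std::size_t best = 0;
            if (!group_is_reliable(g)) {
                for (std::size_t v = 1; v < stats_[g].size(); ++v)
                    if (stats_[g][v].runs < stats_[g][best].runs) best = v;
            } else {
                double best_ms = std::numeric_limits<double>::max();
                for (std::size_t v = 0; v < stats_[g].size(); ++v)
                    if (stats_[g][v].mean() < best_ms) {
                        best_ms = stats_[g][v].mean();
                        best = v;
                    }
            }
            return best;
        }

        // Record a timed run so the profile converges toward reliability.
        void record(std::size_t g, std::size_t v, double elapsed_ms) {
            stats_[g][v].runs++;
            stats_[g][v].total_ms += elapsed_ms;
        }
    };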
“…Programming models should be able to support this heterogeneity and hierarchy in such a way that the application is unaware of the underlying hardware and can dynamically adapt to it. The OmpSs programming model combines ideas from OpenMP [4] and StarSs [5]: it enhances OpenMP with support for irregular and asynchronous parallelism and heterogeneous architectures. It incorporates data-flow concepts that allow the compiler/runtime to automatically move data as necessary and perform different kinds of optimizations.…”
Section: Introduction
confidence: 99%
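
For readers unfamiliar with the data-flow annotations this excerpt mentions, here is a small sketch in the OmpSs style; the in/out clause syntax loosely follows the OmpSs documentation and should be treated as an assumption. A plain compiler ignores the pragmas and executes serially, while an OmpSs toolchain turns each annotated call into a task and orders tasks by the declared dependencies.

    // A small OmpSs-style sketch; clause details are assumptions.
    #include <cstdio>

    #pragma omp task in(*a) out(*b)
    void scale(const float *a, float *b) { *b = 2.0f * *a; }

    #pragma omp task in(*b)
    void report(const float *b) { std::printf("b = %f\n", *b); }

    int main() {
        float a = 21.0f, b = 0.0f;
        scale(&a, &b);        // task 1: writes b
        report(&b);           // task 2: reads b, so it runs after task 1
        #pragma omp taskwait  // wait for both tasks before exiting
        return 0;
    }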