2013 IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum
DOI: 10.1109/ipdpsw.2013.217
Composing Multiple StarPU Applications over Heterogeneous Machines: A Supervised Approach

Abstract: Enabling HPC applications to perform efficiently when invoking multiple parallel libraries simultaneously is a great challenge. Even if a single runtime system is used underneath, scheduling tasks or threads coming from different libraries over the same set of hardware resources introduces many issues, such as resource oversubscription, undesirable cache flushes or memory bus contention. This paper presents an extension of StarPU, a runtime system specifically designed for heterogeneous architectures, that allo…

Cited by 24 publications (20 citation statements). References 22 publications.
“…This contribution suffered from the fact that it does not allow dynamically changing the number of resources assigned to a parallel kernel. Our contribution in this study is a generalization of a previous work [16], where we introduced the so-called scheduling contexts, which aim at structuring the parallelism of complex applications. Although our runtime system is able to cope with several flavors of inner parallelism (OpenMP, Pthreads, StarPU) simultaneously, in this paper we focus on the use of OpenMP to implement internal task parallelism.…”
Section: Related Work (confidence: 99%)
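The scheduling contexts mentioned in the statement above can be pictured as disjoint, resizable partitions of the machine's workers, with a supervisor moving workers between contexts at run time. The following is a minimal conceptual sketch, not the real StarPU API; the names `SchedulingContext` and `resize` are illustrative assumptions.

```python
# Conceptual sketch of scheduling contexts: each parallel library gets a
# disjoint set of CPU workers, and a supervisor can move workers between
# contexts at run time. This is NOT the actual StarPU interface.

class SchedulingContext:
    def __init__(self, name, workers):
        self.name = name
        self.workers = set(workers)

def resize(src, dst, n):
    """Move up to n workers from context src to context dst,
    always leaving src at least one worker."""
    moved = {src.workers.pop() for _ in range(min(n, len(src.workers) - 1))}
    dst.workers |= moved
    return moved

# Partition 8 CPU workers between two parallel libraries.
blas_ctx = SchedulingContext("blas", range(0, 4))
fft_ctx = SchedulingContext("fft", range(4, 8))

# The supervisor shrinks the under-utilised context and grows the other.
resize(fft_ctx, blas_ctx, 2)
```

Keeping the partitions disjoint is what avoids the oversubscription and cache-interference problems the abstract describes: at any instant, a core belongs to exactly one library.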
“…The approach we propose in this paper to tackle the granularity problem is based on resource aggregation: instead of dynamically splitting tasks, we rather aggregate resources to process coarse-grain tasks in a parallel manner on the critical resource, the CPU. To deal with Directed Acyclic Graphs (DAGs) of parallel tasks, we have enhanced the StarPU runtime system (see [8,16]) to cope with parallel tasks, the implementation of which relies on another parallel runtime system (e.g. OpenMP).…”
Section: Introduction (confidence: 99%)
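The resource-aggregation idea above can be illustrated with a small sketch: rather than splitting a coarse task into finer ones, several CPU workers are aggregated to execute a single parallel task, whose inner parallelism would in practice be delegated to a runtime such as OpenMP. All names here (`run_parallel_task`) are hypothetical and the "workers" are simulated sequentially.

```python
# Illustrative sketch of resource aggregation: one coarse-grain task whose
# iteration space is statically split across the workers aggregated for it.
# The per-worker execution is simulated sequentially here; a real system
# would hand each chunk to an OpenMP team member.

def run_parallel_task(data, workers):
    """Split one coarse task's iteration space over the aggregated workers
    and combine the per-worker partial results."""
    chunk = (len(data) + len(workers) - 1) // len(workers)
    partials = []
    for worker, start in zip(workers, range(0, len(data), chunk)):
        # Each aggregated worker handles one contiguous chunk.
        partials.append(sum(data[start:start + chunk]))
    return sum(partials)

# Four CPU workers cooperate on a single coarse task.
result = run_parallel_task(list(range(100)), workers=[0, 1, 2, 3])
```

The point of the technique is that the task graph keeps its coarse granularity (fewer scheduling decisions, better data locality) while the critical resource, the CPU, is still kept fully busy.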
“…More general approaches propose complete integrated frameworks that exploit lower-level specific programming models. Some examples include OM-PICUDA [9], Cashemere [6], StarPU [7] or the skeleton programming framework based on it, SkePU [2]. PACXX [4] is a transformation system integrated into the LLVM compiler framework.…”
Section: Related Work (confidence: 99%)
“…StarPU relies on a hypervisor to dynamically choose which implementation of a kernel will be more suitable for the target hardware resources. Moreover, it uses dynamic resource allocation with scheduling contexts [6]. The use of a global hypervisor may mitigate scheduling problems; however, tasks are still coded with classic runtimes, thus creating situations where runtime stacking issues may appear.…”
Section: Related Work (confidence: 99%)
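The hypervisor-driven choice described in this last statement can be sketched as a cost-based selection among kernel variants: for each kernel, pick the target whose estimated run time is lowest. The timing numbers, kernel names, and the `choose_device` helper below are made-up illustrations, not StarPU's actual performance-model interface.

```python
# Hedged sketch of a hypervisor-style decision: choose, per kernel, the
# hardware target with the lowest estimated execution time. The estimates
# here are fabricated for illustration; a real runtime would calibrate them
# from past executions.

ESTIMATED_TIME_MS = {
    ("scal", "cpu"): 12.0,
    ("scal", "cuda"): 3.5,
    ("gemm", "cpu"): 80.0,
    ("gemm", "cuda"): 9.0,
}

def choose_device(kernel, devices=("cpu", "cuda")):
    """Return the device with the lowest estimated run time for kernel."""
    return min(devices, key=lambda d: ESTIMATED_TIME_MS[(kernel, d)])
```

Such history-based cost models are what let a heterogeneous runtime place each task on the device where it is expected to run fastest, instead of relying on a static assignment.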