2015
DOI: 10.1016/j.parco.2014.11.004

Parallelizing with BDSC, a resource-constrained scheduling algorithm for shared and distributed memory systems

Abstract: We introduce a new parallelization framework for scientific computing based on BDSC, an efficient automatic scheduling algorithm for parallel programs in the presence of resource constraints on the number of processors and their local memory size. BDSC extends Yang and Gerasoulis's Dominant Sequence Clustering (DSC) algorithm; it uses sophisticated cost models and addresses both shared and distributed parallel memory architectures. We describe BDSC, its integration within the PIPS compiler infrastructure and i…
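
The abstract describes the essence of BDSC: DSC-style clustering of a task DAG, extended with bounds on the number of processors and on per-processor local memory. As a rough illustration only, here is a minimal, hypothetical Python sketch of a bounded greedy clustering in that spirit; the DAG, cost model, and placement rule are invented for the example and do not reproduce the paper's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    cost: int  # computation time (arbitrary units)
    mem: int   # local-memory footprint (arbitrary units)

# Hypothetical fork-join DAG; edge weights are communication costs,
# paid only when the two endpoints land in different clusters.
tasks = {t.name: t for t in [Task("a", 2, 1), Task("b", 3, 2),
                             Task("c", 4, 2), Task("d", 1, 1)]}
edges = {("a", "b"): 5, ("a", "c"): 2, ("b", "d"): 4, ("c", "d"): 3}
topo = ["a", "b", "c", "d"]  # a topological order of the DAG

P, M = 2, 4  # resource bounds: at most P clusters (processors), memory M each

clusters = []        # per cluster: running "finish" time and "mem" load
placed, start = {}, {}

for name in topo:
    t = tasks[name]
    candidates = []
    # option 1: absorb the task into an existing cluster with spare memory
    for i, cl in enumerate(clusters):
        if cl["mem"] + t.mem > M:
            continue  # memory bound rejects this placement (BDSC-style)
        ready = max((start[u] + tasks[u].cost + (0 if placed[u] == i else c)
                     for (u, v), c in edges.items() if v == name), default=0)
        candidates.append((max(ready, cl["finish"]), i))
    # option 2: open a fresh cluster, if the processor bound allows it
    if len(clusters) < P:
        ready = max((start[u] + tasks[u].cost + c
                     for (u, v), c in edges.items() if v == name), default=0)
        candidates.append((ready, len(clusters)))
    # earliest achievable start time wins; a real implementation would
    # also have to handle the case where no feasible placement exists
    s, i = min(candidates)
    if i == len(clusters):
        clusters.append({"finish": 0, "mem": 0})
    placed[name], start[name] = i, s
    clusters[i]["finish"] = s + t.cost
    clusters[i]["mem"] += t.mem

print(placed)                              # {'a': 0, 'b': 0, 'c': 1, 'd': 1}
print(max(c["finish"] for c in clusters))  # 10, the makespan of this schedule
```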

Cited by 9 publications (6 citation statements)
References 25 publications
“…In this section, we present a concise description of some of the previously proposed clustering based task scheduling algorithms. Some well‐known previous algorithms are EZ (Edge Zeroing), LC (Linear Clustering), DSC (Dominant Sequence Clustering), GDS (Greedy Dominant Sequence), MCP (Modified Critical Path), DCP (Dynamic Critical Path), CASC (Clustering Algorithm for Synchronous Communication), CPPS (Cluster Pair Priority Scheduling), CCLC (Computation Communication Load Clustering), DCCL (Dynamic Computation Communication Load), RDCC (Randomized Dynamic Computation Communication), BDSC (Bounded Dominant Sequence Clustering), and LOCAL …”
Section: Related Work (mentioning)
confidence: 99%
“…The duplication-based algorithms 2,24-29 attempt to minimize the communication cost between tasks through duplication of tasks onto different processors. The clustering-based algorithms, 3,16,30-44 mainly applicable for an unlimited quantity of processors, are divided into 2 steps. In the initial step, massively communicating tasks of a given task graph are combined into a cluster.…”
mentioning
confidence: 99%
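
The statement above splits clustering-based scheduling into two steps, the first being to merge heavily communicating tasks into clusters. Below is a minimal, hypothetical sketch of that first step in the spirit of edge zeroing, with a made-up task graph and threshold; real algorithms such as EZ or DSC accept a merge only when it does not lengthen the dominant sequence, which this sketch does not check.

```python
# Union-find forest: tasks merged into one cluster share a root.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving keeps the forest shallow
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

# Hypothetical task graph: (producer, consumer) -> communication cost.
comm = {("t1", "t2"): 9, ("t1", "t3"): 1, ("t2", "t4"): 7, ("t3", "t4"): 2}

THRESHOLD = 5  # zero (merge away) edges costing at least this much
for (u, v), cost in sorted(comm.items(), key=lambda e: -e[1]):
    if cost >= THRESHOLD:
        union(u, v)  # step 1: heavily communicating tasks share a cluster

clusters = {}
for task in {x for e in comm for x in e}:
    clusters.setdefault(find(task), []).append(task)
print(sorted(map(sorted, clusters.values())))  # [['t1', 't2', 't4'], ['t3']]
```

The second step, which the quoted statement truncates, typically maps the resulting clusters onto the available physical processors.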
“…(iii) They demonstrated its adequacy on a real VM-enabled cluster environment under diverse levels of contention. In a similar manner, Khaldi et al [17] described a parallelization framework for scientific computing based on BDSC, an efficient automatic scheduling algorithm for parallel programs in the presence of resource constraints on the number of processors and their local memory size. BDSC extends Yang and Gerasoulis' dominant sequence clustering (DSC) algorithm; it uses sophisticated cost models and addresses both shared and distributed parallel memory architectures.…”
Section: Literature Survey (mentioning)
confidence: 99%
“…The search direction can be calculated from the head angle by using the polar-to-Cartesian coordinate transformation of eqs. (15)-(17). In the GSO algorithm, at the kth iteration, the producer X_P behaves as follows:…”
Section: Evaluation of Fitness Function (mentioning)
confidence: 99%
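
The citing paper's eqs. (15)-(17) are not reproduced in this statement. In the standard GSO formulation (He et al.), the producer's search direction is a unit vector obtained from the head angles via a polar-to-Cartesian transformation; the sketch below assumes that form, for a search space of at least two dimensions.

```python
import math

def direction(phi):
    """Unit search direction from n-1 head angles phi in an n-dimensional
    space, via the polar-to-Cartesian transformation used in GSO:
      d_1 = prod cos(phi_q),  d_j = sin(phi_{j-1}) * prod_{q>=j} cos(phi_q),
      d_n = sin(phi_{n-1})."""
    n = len(phi) + 1
    d = [0.0] * n
    d[0] = math.prod(math.cos(p) for p in phi)
    for j in range(1, n - 1):
        d[j] = math.sin(phi[j - 1]) * math.prod(math.cos(p) for p in phi[j:])
    d[n - 1] = math.sin(phi[-1])
    return d

# Example in 3D: the result is a unit vector (norm 1).
print(direction([math.pi / 4, math.pi / 3]))  # [0.3536..., 0.3536..., 0.8660...]
```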
“…Furthermore, PIPS uses an integer polyhedral abstraction to represent the domains of the program variables, which turns out to be very effective in the parallelization context. PIPS contains an automatic task scheduler based on the BDSC algorithm [33], which is an improvement of DSC. The BDSC output can be an input to our automatic distribution of sequential code.…”
Section: PIPS, a Source-to-Source Compiler (mentioning)
confidence: 99%
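
For intuition about the integer polyhedral abstraction the statement above attributes to PIPS, here is a toy sketch: a variable's domain is kept as a conjunction of affine inequalities over the program variables, which is what makes domain and dependence reasoning tractable. The representation and names below are hypothetical, not PIPS internals.

```python
# Domain of (i, N) inside "for i in range(N)": i >= 0 and N - i - 1 >= 0.
# Each constraint is (coefficients, constant) encoding sum(c*x) + b >= 0.
constraints = [
    ({"i": 1}, 0),            #  1*i + 0      >= 0
    ({"i": -1, "N": 1}, -1),  # -1*i + N - 1  >= 0
]

def member(point, constraints):
    """True iff the integer point satisfies every affine inequality."""
    return all(sum(c * point.get(v, 0) for v, c in coeffs.items()) + b >= 0
               for coeffs, b in constraints)

print(member({"i": 3, "N": 10}, constraints))   # True: 3 lies in [0, 9]
print(member({"i": 10, "N": 10}, constraints))  # False: outside the domain
```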