Proceedings of the Seventeenth Annual ACM Symposium on Parallelism in Algorithms and Architectures 2005
DOI: 10.1145/1073970.1073983
Scheduling malleable tasks with precedence constraints

Abstract: In this paper we propose an approximation algorithm for scheduling malleable tasks with precedence constraints. Based on an interesting model for malleable tasks with continuous processor allotments by Prasanna and Musicus [22,23,24], we define two natural assumptions for malleable tasks: the processing time of any malleable task is non-increasing in the number of processors allotted, and the speedup is concave in the number of processors. We show that under these assumptions the work function of any malleable…
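The two assumptions in the abstract are easy to state operationally. The sketch below (not from the paper) checks them numerically for a hypothetical Amdahl-style processing-time model; the model itself, its parameters (`work`, `seq_fraction`), and the discrete notion of concavity (non-increasing marginal speedup gains) are illustrative assumptions, not the Prasanna–Musicus continuous model.

```python
# Minimal sketch of the two malleable-task assumptions, using a
# hypothetical Amdahl-style model p(j) = work * (f + (1 - f) / j).
# The model and its parameters are illustrative, not from the paper.

def processing_time(j: int, work: float = 100.0, seq_fraction: float = 0.1) -> float:
    """Hypothetical processing time of one task on j processors."""
    return work * (seq_fraction + (1.0 - seq_fraction) / j)

def is_non_increasing(p, max_procs: int) -> bool:
    """Assumption 1: p(j) is non-increasing in the allotment j."""
    times = [p(j) for j in range(1, max_procs + 1)]
    return all(a >= b for a, b in zip(times, times[1:]))

def speedup_is_concave(p, max_procs: int) -> bool:
    """Assumption 2: speedup s(j) = p(1) / p(j) is (discretely) concave,
    i.e. the marginal gains s(j+1) - s(j) are non-increasing."""
    s = [p(1) / p(j) for j in range(1, max_procs + 1)]
    gains = [b - a for a, b in zip(s, s[1:])]
    return all(g1 >= g2 for g1, g2 in zip(gains, gains[1:]))

print(is_non_increasing(processing_time, 16))   # True
print(speedup_is_concave(processing_time, 16))  # True
```

Under this model both checks pass: adding processors never slows the task, and each extra processor helps less than the previous one, which is exactly the regime where the paper's work-function argument applies.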

Cited by 32 publications (33 citation statements)
References 22 publications
“…GRAD achieves O(1)-competitiveness with respect to makespan for job sets with arbitrary release times, and O(1)-competitiveness with respect to mean response time for batched job sets where all jobs are released simultaneously. Unlike many previous results, which either assume clairvoyance [24,19,25] or use instantaneous parallelism [11,9], GRAD removes these restrictive assumptions. Moreover, because the quantum length can be adjusted to amortize the cost of context-switching during processor reallocation, GRAD provides effective control over the scheduling overhead and ensures efficient utilization of processors.…”
Section: Introduction
confidence: 94%
“…One major issue of parallel job scheduling is how to efficiently share multiple processors among a number of competing jobs, while ensuring each job a required quality of services (see e.g. [15,8,7,11,9,18,24,19,25,22,31,28,23,13,14,5]). Efficiency and fairness are two important performance measures, where efficiency is often quantified in terms of makespan and mean response time.…”
Section: Introduction
confidence: 99%
“…We used the algorithm described in [5] for homogeneous multiprocessor scheduling. A series-parallel reduction of the graph is applied whenever possible [9, p. 30].…”
Section: Task Parallelism
confidence: 99%
“…GRAD achieves O(1)-competitiveness with respect to makespan for job sets with arbitrary release times, and O(1)-competitiveness with respect to mean response time for batched job sets where all jobs are released simultaneously. Unlike many previous results, which either assume clairvoyance [29,21,31] or use instantaneous parallelism [10,6], GRAD removes these restrictive assumptions. Moreover, because the quantum length can be adjusted to amortize the cost of context-switching during processor reallocation, GRAD provides effective control over the scheduling overhead and ensures efficient utilization of processors.…”
confidence: 94%