2021
DOI: 10.48550/arxiv.2111.06225
Preprint

Assigning and Scheduling Generalized Malleable Jobs under Subadditive or Submodular Processing Speeds

Abstract: Malleable scheduling is a model that captures the possibility of parallelization to expedite the completion of time-critical tasks. A malleable job can be allocated and processed simultaneously on multiple machines, occupying the same time interval on all these machines. We study a general version of this setting, in which the functions determining the joint processing speed of machines for a given job follow different discrete concavity assumptions. As we show, when the processing speeds are fractionally subadditive…
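For orientation (the abstract above is truncated), here is a minimal sketch of the model and of the standard definitions of the speed-function classes named in the title; these are the textbook definitions, not necessarily the paper's exact formulation.

```latex
% Generalized malleable scheduling: each job j has a monotone speed function
% g_j over sets of machines, and its processing time on a set S is the
% reciprocal joint speed.
\[
  f_j(S) \;=\; \frac{1}{g_j(S)}, \qquad S \subseteq M,\ g_j(S) > 0.
\]
% Standard function classes for g : 2^M \to \mathbb{R}_{\ge 0}
% (textbook definitions, not necessarily the paper's exact assumptions):
\begin{align*}
  \text{subadditive:} \quad
    & g(S \cup T) \le g(S) + g(T)
      && \text{for all } S, T \subseteq M,\\
  \text{fractionally subadditive:} \quad
    & g(S) \le \sum_{T} \alpha_T \, g(T)
      && \text{whenever } \alpha \ge 0 \text{ and } \sum_{T \ni i} \alpha_T \ge 1 \text{ for all } i \in S,\\
  \text{submodular:} \quad
    & g(S \cup \{i\}) - g(S) \ge g(T \cup \{i\}) - g(T)
      && \text{for all } S \subseteq T \subseteq M,\ i \notin T.
\end{align*}
```

For monotone nonnegative set functions these classes are nested: every submodular function is fractionally subadditive, and every fractionally subadditive function is subadditive.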

Cited by 2 publications (9 citation statements) · References 30 publications
“…Our main contribution is an O(1)-approximation for the Assignment problem with M♮-concave processing speeds (see Section 1.2 for definitions of submodularity and M♮-concavity). Because all M♮-concave functions are submodular, our results, together with the aforementioned transformation [10], imply the following theorem.…”
Section: Generalized Malleable Scheduling and Main Results (mentioning)
Confidence: 63%
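The quoted passage points to Section 1.2 of the paper for definitions. For reference, a commonly used exchange-property definition of M♮-concavity for set functions (following Murota) is sketched below; this is a general statement, not a quotation from the paper.

```latex
% M-natural-concave set function g : 2^M \to \mathbb{R} (exchange property):
% for all S, T \subseteq M and every i \in S \setminus T, either
\[
  g(S \setminus \{i\}) + g(T \cup \{i\}) \;\ge\; g(S) + g(T),
\]
% or there exists some j \in T \setminus S with
\[
  g\big((S \setminus \{i\}) \cup \{j\}\big) + g\big((T \cup \{i\}) \setminus \{j\}\big) \;\ge\; g(S) + g(T).
\]
```

As the quoted statement notes, every M♮-concave function is submodular, which is the containment its argument relies on.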
“…Parallel execution of a job on multiple machines is often used to optimize the overall makespan in time-critical task scheduling systems. Practical applications are numerous and diverse, varying from task scheduling in production and logistics, such as quay crane allocation in naval logistics [4,13] and cleaning activities on trains [3], to optimizing the performance of computationally demanding tasks, such as web search index update [27] and training neural networks [11] (see also [9,10] for further references and examples).…”
Section: Introduction (mentioning)
Confidence: 99%
“…Parallel execution of a job on multiple machines is often used to optimize the overall makespan in time-critical task scheduling systems. Practical applications are numerous and diverse, varying from task scheduling in production and logistics, such as quay crane allocation in naval logistics [4,15] and cleaning activities on trains [3], to optimizing the performance of computationally demanding tasks, such as web search index update [28] and training neural networks [12] (see also [9,10] for further references and examples).…”
Section: Introduction (mentioning)
Confidence: 99%
“…However, as recently observed in [10], the aforementioned models, in which the processing power of a heterogeneous set of machines is expressed by a single scalar, cannot capture the (possibly complicated) combinatorial interaction effects arising among different machines processing the same job. Practical settings where such complicated interdependencies among machines may arise include modern heterogeneous parallel computing systems, typically consisting of CPUs, GPUs, and I/O nodes [5], and highly distributed processing systems, where massive parallelization is subject to constraints imposed by the underlying communication network [1]; see [10] for further references and examples. Having such practical settings in mind, Fotakis et al. [10] introduced a generalized malleable scheduling model, where the processing time f_j(S) = 1/g_j(S) of a job j depends on a job-specific processing speed function g_j(S) of the set of machines S allocated to j.…”
Section: Introduction (mentioning)
Confidence: 99%
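To make the relation f_j(S) = 1/g_j(S) concrete, here is a small, hypothetical Python sketch. It uses an illustrative submodular speed function (a concave function of the machines' total individual speed, which is not taken from the paper) and, for a fixed assignment of jobs to machine sets, computes each machine's load and the resulting maximum load, a natural lower bound on the makespan. All names and numbers are invented for illustration.

```python
from math import sqrt

# Illustrative machine speeds (not from the paper); any nonnegative values work.
MACHINE_SPEED = {"m1": 4.0, "m2": 1.0, "m3": 1.0}

def speed(machines: frozenset) -> float:
    """Joint processing speed g_j(S): a concave function (sqrt) of the total
    individual speed. Concave-of-additive functions are monotone submodular."""
    return sqrt(sum(MACHINE_SPEED[m] for m in machines))

def processing_time(machines: frozenset) -> float:
    """Processing time f_j(S) = 1 / g_j(S) of a job run on the machine set S."""
    return 1.0 / speed(machines)

def max_load(assignment: dict) -> float:
    """For a fixed assignment job -> machine set, each machine's load is the
    total processing time of the jobs it participates in; the maximum load is
    a natural lower bound on the makespan of any feasible schedule."""
    load = {m: 0.0 for m in MACHINE_SPEED}
    for machines in assignment.values():
        t = processing_time(machines)
        for m in machines:
            load[m] += t
    return max(load.values())

# Example: two jobs assigned to overlapping machine sets.
assignment = {
    "job1": frozenset({"m1", "m2"}),
    "job2": frozenset({"m2", "m3"}),
}
print(max_load(assignment))
```

The maximum-load quantity computed here is the kind of objective the quoted statements' "Assignment problem" refers to: once machine sets are fixed, no schedule can finish earlier than the busiest machine's total processing time.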