Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis 2015
DOI: 10.1145/2807591.2807636
Multi-objective job placement in clusters

Abstract: One of the key decisions made by both MapReduce and HPC cluster management frameworks is the placement of jobs within a cluster. To make this decision, they consider factors like resource constraints within a node or the proximity of data to a process. However, they fail to account for the degree of collocation on the cluster's nodes. A tight process placement can create contention for the intra-node shared resources, such as shared caches, memory, disk, or network bandwidth. A loose placement would create les…

Cited by 26 publications (13 citation statements)
References 55 publications
“…Calling omp_set_dop blocks the caller until changing the DoP takes effect. 3 Communicating with the SCALO daemon. For the third extension, the parallel runtime creates a dedicated management thread to communicate with the SCALO daemon.…”
Section: Implementing Instrumentation and Adaptivity in the OpenMP Runtime
confidence: 99%
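The quote above describes a caller that blocks on omp_set_dop until a dedicated management thread has applied the degree-of-parallelism (DoP) change. That blocking hand-off can be sketched with a condition variable; the class and method names below are illustrative stand-ins, not SCALO's actual API:

```python
import threading

class Runtime:
    """Toy model of a parallel runtime whose DoP is requested by a caller
    but applied by a separate management thread (which, in the real system,
    would first coordinate with an external daemon)."""

    def __init__(self, dop):
        self._dop = dop
        self._pending = None
        self._cv = threading.Condition()

    def set_dop(self, new_dop):
        """Request a DoP change and block until it has taken effect."""
        with self._cv:
            self._pending = new_dop
            self._cv.notify_all()  # wake the management thread
            self._cv.wait_for(lambda: self._pending is None)

    def management_loop(self, stop):
        """Runs in a dedicated thread: applies pending DoP changes."""
        with self._cv:
            while not stop.is_set():
                if self._pending is not None:
                    self._dop = self._pending  # change takes effect here
                    self._pending = None
                    self._cv.notify_all()      # unblock the caller
                else:
                    self._cv.wait(timeout=0.1)

stop = threading.Event()
rt = Runtime(dop=8)
mgr = threading.Thread(target=rt.management_loop, args=(stop,), daemon=True)
mgr.start()
rt.set_dop(4)  # returns only once the management thread applied the change
stop.set()
```

The condition variable lets the caller sleep without spinning, and the single lock guarantees the caller observes the new DoP before set_dop returns.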
“…A single, parallel program is often limited in how effectively it can use this increasing hardware parallelism. Co-locating jobs, that is co-executing multiple parallel programs, on a node can increase throughput and energy efficiency [3][4][5]27]. As the parallelism continues to increase, efficient execution of some workloads will require node sharing [27], and some systems already enable users to share a node by statically partitioning cores between different jobs.…”
confidence: 99%
“…Our approach is based on a multi-objective scheduling algorithm focusing on minimizing a weighted sum of objectives. The advantage of such approach is that it is automatically guided by predetermined weights while the disadvantage is that it is hard to determine the right values for the weights [10]. In contrast, a posteriori methods produce a Pareto front of solutions without predetermined values [10].…”
Section: Related Work
confidence: 99%
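The contrast the quote draws between an a priori weighted sum (one solution, picked by predetermined weights) and an a posteriori Pareto front (a set of non-dominated solutions the user chooses from) can be illustrated on toy data; the schedules and objective values below are made up:

```python
# Hypothetical two-objective schedules: (contention, imbalance), both minimized.
schedules = [(1.0, 9.0), (3.0, 4.0), (5.0, 5.0), (8.0, 1.5)]

def weighted_sum(points, w):
    """A priori method: fixed weights collapse the objectives into one
    score and select a single schedule automatically."""
    return min(points, key=lambda p: w[0] * p[0] + w[1] * p[1])

def pareto_front(points):
    """A posteriori method: keep every non-dominated schedule and let
    the user pick among them."""
    def dominated(p):
        return any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
    return [p for p in points if not dominated(p)]
```

With equal weights the weighted sum returns only (3.0, 4.0), while the Pareto front also retains the two extreme trade-offs (1.0, 9.0) and (8.0, 1.5) — each best in at least one objective, as the quote notes.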
“…The advantage of such approach is that it is automatically guided by predetermined weights while the disadvantage is that it is hard to determine the right values for the weights [10]. In contrast, a posteriori methods produce a Pareto front of solutions without predetermined values [10]. Each solution is better than the others with respect to at least one objective and users can choose one from the produced solutions.…”
Section: Related Work
confidence: 99%
“…Multi-objective optimization. We utilize the weighted sum approach to transform the multiobjective optimization problem to a single objective optimization problem, which has been widely used in the previous studies [120,74]. Denote the weights for resource efficiency, job latency and fairness are W e , W q and W f , respectively.…”
Section: Motivating Examples
confidence: 99%
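The W_e/W_q/W_f scalarization described in the quote can be sketched in a few lines. The normalization of each objective to [0, 1] and the sign conventions (higher efficiency and fairness are better, lower latency is better) are our assumptions for illustration, not details from the cited study:

```python
def schedule_score(efficiency, latency, fairness, w_e, w_q, w_f):
    """Weighted-sum scalarization of resource efficiency, job latency,
    and fairness into a single objective to maximize. Assumes all three
    inputs are pre-normalized to [0, 1]."""
    return w_e * efficiency - w_q * latency + w_f * fairness

# Two hypothetical placements compared under weights (0.5, 0.3, 0.2):
a = schedule_score(0.9, 0.4, 0.6, 0.5, 0.3, 0.2)  # tight packing: 0.45
b = schedule_score(0.7, 0.1, 0.8, 0.5, 0.3, 0.2)  # loose packing: 0.48
```

Here the weights decide the outcome (the looser placement b wins), which is exactly the sensitivity to predetermined weights that the earlier quote flags as the approach's disadvantage.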