SC16: International Conference for High Performance Computing, Networking, Storage and Analysis 2016
DOI: 10.1109/sc.2016.74
Elastic Multi-resource Fairness: Balancing Fairness and Efficiency in Coupled CPU-GPU Architectures

Cited by 13 publications (5 citation statements) · References 22 publications
“…Reference [27] proposed the AlloX strategy for efficiently predicting the resource demands of machine learning workloads, enabling rational utilization of GPU and CPU resources and reducing the cost of CPU/GPU data centers. The authors of [28] study an elastic multi-resource allocation strategy for coupled CPU-GPU architectures, which provides resource availability while better ensuring fairness among users. In [29], a deep Q-learning resource prediction and scheduling algorithm for GPUs is proposed and three prototype resource management systems are designed; simulation results show significant improvements in resource utilization over ordinary heuristics.…”
Section: A. Related Work
confidence: 99%
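As context for the multi-resource fairness that [28] (the paper on this page) builds on, below is a minimal Python sketch of dominant resource fairness (DRF) via progressive filling over a CPU pool and a GPU pool. It only illustrates the fairness notion, not the paper's elastic algorithm; the capacities, user names, and demand vectors are hypothetical.

```python
# Minimal sketch of progressive-filling dominant resource fairness (DRF)
# across two resource pools (CPU cores and GPU compute slots). This is an
# illustration of the fairness notion discussed above, not the elastic
# algorithm of [28]; capacities and per-task demands are made up.

CAPACITY = {"cpu": 32.0, "gpu": 8.0}

# Each user's demand vector: resources consumed per scheduled task.
DEMANDS = {
    "user_a": {"cpu": 1.0, "gpu": 0.5},   # GPU-heavy relative to capacity
    "user_b": {"cpu": 4.0, "gpu": 0.25},  # CPU-heavy relative to capacity
}

def dominant_share(tasks, demand):
    """Largest fraction of any resource pool consumed by this user."""
    return max(tasks * demand[r] / CAPACITY[r] for r in CAPACITY)

def drf_allocate():
    tasks = {u: 0 for u in DEMANDS}
    used = {r: 0.0 for r in CAPACITY}
    while True:
        # Progressive filling: serve the user with the smallest dominant share.
        user = min(tasks, key=lambda u: dominant_share(tasks[u], DEMANDS[u]))
        demand = DEMANDS[user]
        if any(used[r] + demand[r] > CAPACITY[r] for r in CAPACITY):
            break  # the next task no longer fits in some pool: stop
        tasks[user] += 1
        for r in CAPACITY:
            used[r] += demand[r]
    return tasks, used

if __name__ == "__main__":
    tasks, used = drf_allocate()
    print("tasks per user:", tasks)
    print("resources used:", used)
```

Running the sketch shows DRF's characteristic behavior: the GPU-heavy and CPU-heavy users end up with roughly equal dominant shares rather than equal task counts, which is the fairness criterion that elastic schemes then trade off against efficiency.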
“…Heterogeneous computing resource scheduling has been studied in [22], [23], [24], and [25], but these works mainly consider the static resource scheduling scenario and ignore the temporary resource switching scenario. Meanwhile, although resource prediction has effectively improved the flexible scheduling of computing resources in [26], [27], [28], [29], and [30], these works do not consider actual costs and profit from the operators' perspective or how to maximize the benefits of computing operations. Besides, current research on resource scheduling based on game theory is mainly oriented to cloud computing power pricing, network elements, and other fields [31], [32], [33], and [34].…”
Section: B. Motivation and Contributions
confidence: 99%
“…Barik et al. (2014) mapped irregular C++ applications to the GPU device on heterogeneous processors. Fairness and efficiency are two major concerns for users of shared systems; Tang et al. (2016) introduced multi-resource fairness and efficiency on heterogeneous processors. Zhang et al. (2017a) considered the irregularity within workloads and the architectural differences between CPUs and GPUs, and proposed a method that distributes the relatively regular part of a workload to GPUs while keeping the irregular part on CPUs on integrated architectures.…”
Section: Accelerating Irregular Applications on Heterogeneous Processors
confidence: 99%
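The regular-to-GPU / irregular-to-CPU split attributed to Zhang et al. (2017a) can be pictured with a toy partitioner: chunks whose per-item work is uniform go to a GPU queue, while skewed chunks stay on the CPU. The regularity proxy and threshold below are hypothetical and serve only to make the idea concrete.

```python
# Illustrative sketch of routing the regular portion of a workload to the GPU
# and keeping the irregular portion on the CPU. The regularity metric and the
# 0.8 threshold are hypothetical, not taken from Zhang et al. (2017a).

from statistics import mean, pstdev

def regularity(chunk):
    """Crude proxy: low relative variance in per-item work => regular."""
    work = [item["work"] for item in chunk]
    m = mean(work)
    return 0.0 if m == 0 else 1.0 - min(1.0, pstdev(work) / m)

def partition(chunks, threshold=0.8):
    gpu_queue, cpu_queue = [], []
    for chunk in chunks:
        (gpu_queue if regularity(chunk) >= threshold else cpu_queue).append(chunk)
    return gpu_queue, cpu_queue

# Example: one chunk with uniform per-item work, one with heavily skewed work.
chunks = [
    [{"work": 4} for _ in range(8)],                    # regular  -> GPU queue
    [{"work": w} for w in (1, 1, 2, 30, 1, 1, 2, 25)],  # irregular -> CPU queue
]
gpu_queue, cpu_queue = partition(chunks)
print(len(gpu_queue), "chunk(s) to GPU,", len(cpu_queue), "chunk(s) to CPU")
```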
“…Some recent work has addressed coupled environments where the cores and the co-processor units are integrated on a single chip [11,18,20,21]. For instance, in [18], to show the effects of using an integrated GPU, the authors propose specialized scan and aggregation operations.…”
Section: Related Work
confidence: 99%
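As a rough illustration of the scan primitive that co-processing work such as [18] specializes for integrated GPUs, the sketch below performs a partitioned prefix sum: each partition is scanned independently (the portion that could run on either device), then per-partition offsets are added. NumPy's cumsum stands in for device kernels here; this is not the implementation from [18].

```python
# Conceptual sketch of a partitioned scan (prefix sum): scan each partition
# locally, take an exclusive scan of the partition totals to get offsets,
# then add the offsets. np.cumsum is a stand-in for CPU or GPU kernels.

import numpy as np

def partitioned_scan(values, num_partitions=4):
    parts = np.array_split(np.asarray(values), num_partitions)
    # Phase 1: local inclusive scan of each partition (independent work).
    local = [np.cumsum(p) for p in parts]
    # Phase 2: exclusive scan of the partition totals gives each offset.
    totals = np.array([l[-1] if len(l) else 0 for l in local])
    offsets = np.concatenate(([0], np.cumsum(totals)[:-1]))
    # Phase 3: add offsets to produce the global inclusive scan.
    return np.concatenate([l + off for l, off in zip(local, offsets)])

values = np.arange(1, 17)
assert np.array_equal(partitioned_scan(values), np.cumsum(values))
print(partitioned_scan(values))
```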