2019
DOI: 10.1002/cpe.5571
Heuristics for concurrent task scheduling on GPUs

Abstract: Concurrent execution of tasks in GPUs can reduce the computation time of a workload by overlapping data transfer and execution commands. However, it is difficult to implement an efficient runtime scheduler that minimizes the workload makespan, as many execution orderings must be evaluated. In this paper, we employ scheduling theory to build a model that takes into account the device capabilities, workload characteristics, constraints, and objective functions. In our model, GPU task scheduling is refo…
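
The overlap described in the abstract can be illustrated with plain CUDA streams. The sketch below is not the paper's scheduler; it only shows the mechanism such a scheduler orders: each task is a host-to-device copy, a kernel, and a device-to-host copy issued into its own stream, so the copy of one task can overlap the kernel of another. The kernel, task count, and buffer sizes are illustrative.

```cuda
// Minimal sketch (not the paper's scheduler): independent tasks issued in
// separate CUDA streams so the H2D copy of one task can overlap the kernel
// of another. Kernel, task count, and sizes are hypothetical.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scaleKernel(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;          // placeholder per-task work
}

int main() {
    const int nTasks = 2, n = 1 << 20;
    cudaStream_t stream[nTasks];
    float *hBuf[nTasks], *dBuf[nTasks];

    for (int t = 0; t < nTasks; ++t) {
        cudaStreamCreate(&stream[t]);
        cudaMallocHost(&hBuf[t], n * sizeof(float));   // pinned memory enables async copies
        cudaMalloc(&dBuf[t], n * sizeof(float));
        for (int i = 0; i < n; ++i) hBuf[t][i] = 1.0f;
    }

    // Each task is the command sequence H2D copy -> kernel -> D2H copy.
    // Issuing tasks in different streams lets the runtime overlap the copy
    // engine of one task with the compute engine of another; the issue order
    // across streams is what a scheduler would choose to minimize makespan.
    for (int t = 0; t < nTasks; ++t) {
        cudaMemcpyAsync(dBuf[t], hBuf[t], n * sizeof(float),
                        cudaMemcpyHostToDevice, stream[t]);
        scaleKernel<<<(n + 255) / 256, 256, 0, stream[t]>>>(dBuf[t], n);
        cudaMemcpyAsync(hBuf[t], dBuf[t], n * sizeof(float),
                        cudaMemcpyDeviceToHost, stream[t]);
    }
    cudaDeviceSynchronize();              // the makespan ends when all tasks finish

    printf("hBuf[0][0] = %f\n", hBuf[0][0]);
    for (int t = 0; t < nTasks; ++t) {
        cudaFreeHost(hBuf[t]); cudaFree(dBuf[t]); cudaStreamDestroy(stream[t]);
    }
    return 0;
}
```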

Cited by 1 publication (1 citation statement)
References: 23 publications
“…López‐Albelda et al. [6] presented a task scheduling model for GPUs, which is based on the flow shop scheduling problem. The model can predict the execution times of tasks that use CUDA streams to launch multiple independent kernels in the same GPU context.…” (mentioning, confidence: 99%)
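
For the flow shop view mentioned in the citation, each task can be thought of as passing through a fixed sequence of stages, each served by one engine; mapping the stages to H2D copy, kernel execution, and D2H copy is an assumption made here for illustration, not a detail taken from the paper. The host-side sketch below computes the classical permutation flow shop makespan recurrence for hypothetical stage times; it is not the paper's prediction model or heuristic.

```cuda
// Classical permutation flow shop makespan recurrence (host-only sketch).
// Stages 0/1/2 are assumed to be H2D copy, kernel, D2H copy; times are made up.
#include <algorithm>
#include <array>
#include <cstdio>
#include <vector>

// p[t][s]: processing time of task t on stage s.
double flowShopMakespan(const std::vector<std::array<double, 3>> &p,
                        const std::vector<int> &order) {
    std::array<double, 3> done = {0.0, 0.0, 0.0};  // completion time per stage
    for (int t : order) {
        for (int s = 0; s < 3; ++s) {
            // C[t][s] = max(C[t][s-1], C[prev][s]) + p[t][s]
            double ready = (s == 0) ? done[0] : std::max(done[s], done[s - 1]);
            done[s] = ready + p[t][s];
        }
    }
    return done[2];   // completion time of the last task on the last stage
}

int main() {
    // Hypothetical stage times (ms) for three independent tasks.
    std::vector<std::array<double, 3>> p = {{2.0, 5.0, 1.0},
                                            {4.0, 2.0, 3.0},
                                            {1.0, 3.0, 2.0}};
    std::vector<int> order = {0, 1, 2};
    printf("makespan of order 0,1,2: %.1f ms\n", flowShopMakespan(p, order));
    std::swap(order[0], order[2]);
    printf("makespan of order 2,1,0: %.1f ms\n", flowShopMakespan(p, order));
    return 0;
}
```

Comparing the two orderings shows why the execution order matters: different permutations of the same tasks yield different makespans, which is the search space the paper's heuristics address.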