2014 IEEE 7th International Conference on Cloud Computing 2014
DOI: 10.1109/cloud.2014.90
GPU Passthrough Performance: A Comparison of KVM, Xen, VMWare ESXi, and LXC for CUDA and OpenCL Applications

Abstract: As more scientific workloads are moved into the cloud, the need for high performance accelerators increases. Accelerators such as GPUs offer improvements in both performance and power efficiency over traditional multi-core processors; however, their use in the cloud has been limited. Today, several common hypervisors support GPU passthrough, but their performance has not been systematically characterized. In this paper we show that low overhead GPU passthrough is achievable across 4 major hypervisors a…

Cited by 66 publications (33 citation statements); references 15 publications.
“…The next step was moving GPU's to the cloud within the "Infrastructure as a Service" framework. However, even if many research efforts have been spent so far, Amazon is probably the only known example of a major OTT provider offering access to GPU-enabled services to its customers [13]. However, when moved to the NFV context, the problem of virtualizing h/w accelerators must be addressed with a much wider scope: such resources, in fact, must be managed at all levels, from the offering of accelerated VNF's to the customers, to the lowest-level technology aspects at the infrastructure level.…”
Section: NFV and H/w Acceleration (mentioning)
confidence: 99%
“…In the virtualization context, the problem of virtualizing a GPU is now well-known, and can be stated as follows: a guest Virtual Machine (VM), running on a hardware platform provided with GPU-based accelerators, must be able to concurrently and independently access the GPU's, without incurring security issues [13], [14]. Many techniques to achieve GPU virtualization have been presented.…”
Section: A. GPU Virtualization (mentioning)
confidence: 99%
“…Accessing one or more GPUs within a virtual machine is typically accomplished by one of two strategies: 1) via API remoting with device emulation; or 2) using PCI passthrough. We characterize GPGPU performance within virtual machines across two hardware systems, 4 hypervisors, and 3 application sets [6]. KVM again performs well across both the Delta and Bespin systems.…”
Section: III (mentioning)
confidence: 99%
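
With the PCI passthrough strategy quoted above, the physical GPU is exposed directly to the guest, so a standard CUDA program run inside the VM should enumerate the device just as it would on bare metal. The following is a minimal illustrative sketch, not taken from the paper: it simply lists the CUDA devices visible in the guest, which is a common first check that a passed-through GPU is actually usable. The file name and build command are assumptions.

// check_gpu.cu -- minimal sketch (assumed name); build with: nvcc -o check_gpu check_gpu.cu
// Lists the CUDA devices visible inside the (virtual) machine it runs on.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        // No visible device typically means the passthrough assignment or guest driver is missing.
        std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    std::printf("CUDA devices visible in this VM: %d\n", count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // Print the device name, memory size, and the PCI address it appears under in the guest.
        std::printf("  [%d] %s, %zu MiB global memory, PCI %04x:%02x:%02x\n",
                    i, prop.name, prop.totalGlobalMem >> 20,
                    prop.pciDomainID, prop.pciBusID, prop.pciDeviceID);
    }
    return 0;
}

Under API remoting, by contrast, the same program would see an emulated or proxied device rather than the host GPU's own PCI identity, which is one practical way the two strategies can be told apart from inside the guest.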