2012 IEEE 28th Symposium on Mass Storage Systems and Technologies (MSST)
DOI: 10.1109/msst.2012.6232370

vPFS: Bandwidth virtualization of parallel storage systems

Abstract: Existing parallel file systems are unable to differentiate I/O requests from concurrent applications and meet per-application bandwidth requirements. This limitation prevents applications from achieving their desired Quality of Service (QoS) as high-performance computing (HPC) systems continue to scale up. This paper presents vPFS, a new solution that addresses this challenge through a bandwidth virtualization layer for parallel file systems. vPFS employs user-level parallel file system proxies to interpos…

Cited by 16 publications (9 citation statements)
References 24 publications
“…S-CAVE developers [17] propose to leverage the unique position of the hypervisor in order to efficiently share SSD caches between multiple VMs. Similarly, vPFS [18] introduces a bandwidth virtualization layer for parallel file systems that schedules parallel I/Os from different applications based on configurable policies. Unlike our approach, the focus in this context is bandwidth isolation between multiple clients, as opposed to elasticity.…”
Section: Introduction
confidence: 99%
“…The variety of access patterns exhibited by HPC applications has led modern HPC clusters to observe high levels of I/O interference and performance degradation, inhibiting their ability to achieve predictable and controlled I/O performance [62,121]. While several efforts have been made to prevent I/O contention and performance degradation of HPC infrastructures (e.g., QoS provisioning [116,123], I/O flow and job scheduling optimizations [44,101]), none have considered the path of end-to-end enforcement of storage policies nor system-wide flow optimizations. To this end, SDS systems have been recently introduced to HPC environments.…”
Section: High-Performance Computing Infrastructures
confidence: 99%
“…Ohta et al. [74] take it further by including a handle-based round-robin scheduling algorithm. A data-layout-aware scheduler for the I/O forwarding layer is proposed by Xu et al. [110] to provide proportional sharing between applications.…”
Section: I/O Scheduling
confidence: 99%
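The statements above repeatedly mention proportional (weighted) sharing of I/O bandwidth between applications. As a minimal sketch of that general idea — weighted round-robin over per-application request queues — the following Python fragment may help; it is purely illustrative, and none of the names below are taken from vPFS or any of the cited schedulers:

```python
from collections import deque

class ProportionalScheduler:
    """Illustrative weighted round-robin over per-application I/O queues.

    Each application gets an integer weight; in every dispatch round an
    application may issue up to `weight` queued requests, so over time
    bandwidth is shared roughly in proportion to the weights.
    """

    def __init__(self, weights):
        # weights: application name -> integer share, e.g. {"A": 2, "B": 1}
        self.weights = weights
        self.queues = {app: deque() for app in weights}

    def submit(self, app, request):
        # Enqueue a pending I/O request for the given application.
        self.queues[app].append(request)

    def dispatch_round(self):
        # Dispatch up to `weight` requests per application in one round.
        dispatched = []
        for app, weight in self.weights.items():
            q = self.queues[app]
            for _ in range(min(weight, len(q))):
                dispatched.append((app, q.popleft()))
        return dispatched

# Example: app A has twice the share of app B.
sched = ProportionalScheduler({"A": 2, "B": 1})
for i in range(4):
    sched.submit("A", f"io-A{i}")
    sched.submit("B", f"io-B{i}")
print(sched.dispatch_round())  # → [('A', 'io-A0'), ('A', 'io-A1'), ('B', 'io-B0')]
```

Real schedulers at the parallel file system or I/O forwarding layer must additionally account for request sizes, striping across servers, and starvation avoidance; this sketch only conveys the proportional-sharing policy itself.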