2017 IEEE International Conference on Cluster Computing (CLUSTER)
DOI: 10.1109/cluster.2017.60
Accelerating a Burst Buffer Via User-Level I/O Isolation

Citations: cited by 16 publications (12 citation statements)
References: 14 publications
“…Thanks to the higher bandwidth of the Burst-Buffers, this improves the I/O transfer time while pipelining the (slowest) phase of sending/receiving data from the PFS with the compute phase of the application. However, as noted by Han et al. [11], this idea may not be viable, as (i) Burst-Buffers are based on technologies that are extremely expensive compared with hard drives and (ii) they are currently based on SSD technology, which is known to have a limited rewrite lifespan [11]. Thus, the large number of I/O operations in HPC applications would exhaust their lifespan too quickly.…”
Section: B. Algorithms To Deal With Burst-Buffers
confidence: 99%
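The pipelining idea described in the statement above (overlapping each compute phase with the slower transfer of the previous phase's output from the Burst-Buffer to the PFS) can be sketched as follows. This is a minimal illustration, not code from the cited paper; the helpers compute_phase, write_to_burst_buffer, drain_to_pfs and the /mnt/bb mount point are hypothetical placeholders.

```python
# Minimal sketch of compute/PFS-transfer pipelining through a burst buffer.
# All function names and paths below are illustrative assumptions.
from concurrent.futures import ThreadPoolExecutor
import time

def compute_phase(step: int) -> bytes:
    """Stand-in for one compute phase producing checkpoint data."""
    time.sleep(0.5)                       # pretend to compute
    return bytes(1024 * 1024)             # 1 MiB of dummy output

def write_to_burst_buffer(step: int, data: bytes) -> str:
    """Fast local write (SSD-backed burst buffer); returns the staged path."""
    path = f"/mnt/bb/ckpt_{step}.bin"     # hypothetical burst-buffer mount
    # open(path, "wb").write(data)        # real write omitted in this sketch
    return path

def drain_to_pfs(path: str) -> None:
    """Slow transfer from the burst buffer to the parallel file system."""
    time.sleep(1.0)                       # pretend the PFS transfer is slow

with ThreadPoolExecutor(max_workers=1) as drainer:
    pending = None
    for step in range(4):
        data = compute_phase(step)                      # compute phase
        staged = write_to_burst_buffer(step, data)      # fast, blocking stage
        if pending is not None:
            pending.result()                            # previous drain done?
        pending = drainer.submit(drain_to_pfs, staged)  # overlaps next compute
    if pending is not None:
        pending.result()
```

Because the staged write to the burst buffer is fast, the application only ever waits on the (slow) PFS drain if it has not finished by the end of the next compute phase.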
“…When running several such applications, even if the overall bandwidth is enough to cope in the long term with the required data transfers, the bursty nature of both read and write operations and the lack of synchronization between applications induce I/O peaks, which in turn degrade the aggregated bandwidth, as noted in [7]. In this context, in order to cope with the limited I/O bandwidth of HPC systems, Burst-Buffers have emerged as a promising solution [8], [9], [10], either as a cache between the computational nodes and the PFS that accelerates all data transfers (at the price of a limited lifetime [11]), or as intermediate storage used to delay write operations and to prefetch read operations, avoiding access conflicts and hiding contention from the user by smoothing out I/O peaks.…”
Section: Introduction
confidence: 99%
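A rough sketch of the write-delaying role described in the statement above: bursty writes are staged in an in-memory queue standing in for the Burst-Buffer, and a background drainer forwards them to the PFS at an assumed sustainable rate, so unsynchronized peaks never hit the PFS directly. The bandwidth constant and the omitted pfs_write call are assumptions for illustration only, not details from the cited works.

```python
# Sketch of "delay writes to smooth I/O peaks": bursts land in a queue
# (the burst buffer) and are drained to the PFS at a bounded, assumed rate.
import queue
import threading
import time

PFS_BANDWIDTH_BYTES_PER_S = 50 * 2**20     # assumed sustainable PFS rate

buffer_queue: "queue.Queue[bytes | None]" = queue.Queue()

def drainer() -> None:
    """Forward buffered writes to the PFS, pacing them to the assumed rate."""
    while True:
        chunk = buffer_queue.get()
        if chunk is None:                  # shutdown sentinel
            break
        # pfs_write(chunk)                 # real PFS call omitted in this sketch
        time.sleep(len(chunk) / PFS_BANDWIDTH_BYTES_PER_S)

t = threading.Thread(target=drainer, daemon=True)
t.start()

# An application emits a burst far faster than the PFS could absorb it;
# each put() returns as soon as the data is staged in the buffer.
for _ in range(8):
    buffer_queue.put(bytes(10 * 2**20))    # 10 MiB per burst
buffer_queue.put(None)
t.join()
```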
“…Thanks to the higher bandwidth of the Burst-Buffers, this has the advantage of improving the I/O transfer time while pipelining the (slowest) phase of sending/receiving data from the PFS with the compute phase of the application. However, as noted by Han et al. [16], this idea is not viable, as (i) Burst-Buffers are based on technologies that are extremely expensive compared with hard drives and (ii) they are currently based on SSD technology, which is known to have a limited rewrite lifespan [16]. Thus, the large number of I/O operations in HPC applications would exhaust their lifespan too quickly.…”
Section: B. Algorithms To Deal With Burst-Buffers
confidence: 99%
“…4) Repeatability: The same jobs are often run many times with different inputs, hence the compute-I/O pattern of an application can be reasonably predicted before execution. When modeling applications, most Burst-Buffer-related work uses workload models based on these patterns [18], [16], [11]. In addition, Mubarak et al. [11] introduce random background traffic representing HPC workloads such as graph computations and linear algebra solvers, based on the work of Yuan et al. [22].…”
Section: Application Model
confidence: 99%
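The repeatable compute-I/O pattern and the random background traffic mentioned in the statement above can be modeled very simply. The sketch below is my own illustration under assumed parameters, not the workload model used in the cited works: a fixed alternation of compute and I/O phases plus Poisson-arrival background transfers.

```python
# Sketch of a workload model: a predictable periodic compute/I-O pattern per
# application, plus random background I/O traffic. All parameters are assumed.
import random
from dataclasses import dataclass

@dataclass
class Phase:
    app: str
    kind: str        # "compute" or "io"
    start: float     # seconds since job start
    duration: float  # seconds

def periodic_app(app: str, compute_s: float, io_s: float, phases: int) -> list[Phase]:
    """Repeatable pattern: the same compute and I/O phases alternate."""
    out, t = [], 0.0
    for _ in range(phases):
        out.append(Phase(app, "compute", t, compute_s)); t += compute_s
        out.append(Phase(app, "io", t, io_s));           t += io_s
    return out

def background_traffic(horizon_s: float, rate_per_s: float, seed: int = 0) -> list[Phase]:
    """Random short transfers approximating unstructured background workloads."""
    rng, out, t = random.Random(seed), [], 0.0
    while t < horizon_s:
        t += rng.expovariate(rate_per_s)               # Poisson arrivals
        out.append(Phase("background", "io", t, rng.uniform(0.1, 1.0)))
    return out

workload = periodic_app("app0", compute_s=30.0, io_s=5.0, phases=4) \
         + background_traffic(horizon_s=140.0, rate_per_s=0.2)
workload.sort(key=lambda p: p.start)
for p in workload[:5]:
    print(p)
```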