2016
DOI: 10.15439/2016f52
A Parallel MPI I/O Solution Supported by Byte-addressable Non-volatile RAM Distributed Cache

Abstract: While many scientific, large-scale applications are data-intensive, fast and efficient I/O operations have become of key importance for HPC environments. We propose an MPI I/O extension based on an in-system distributed cache with data located in Non-volatile Random Access Memory (NVRAM) available in each cluster node. The presented architecture makes effective use of NVRAM properties such as persistence and byte-level access behind the MPI I/O API. Another advantage of the proposed solution is making de…

Cited by 4 publications (4 citation statements)
References 19 publications (19 reference statements)
“…Moreover, the solution is aimed at applications that access small chunks of data (gaining from the byte addressability of NVRAM) from scattered file locations (with no drawback from omitting staging algorithms). As shown in previous papers [16,17], for long-running, data-intensive HPC applications that operate on small data parts, our solution performs better than unmodified MPI I/O. In this paper, we want to evaluate it with an application that does not strictly meet those criteria.…”
Section: NVRAM Distributed Cache Architecture
confidence: 92%
“…In 2016 we proposed the idea of an NVRAM distributed cache serving as an additional layer between a file system and a parallel distributed application [16]. The extension was transparent to the developer because of its compatibility with the well-known Message Passing Interface (MPI) I/O API [18].…”
Section: Introduction
confidence: 99%
“…As we will show in experiments, the performance of specific operations in regular MPI I/O and a PFS can be significantly improved using our byte-addressable NVRAM distributed cache [23].…”
Section: Motivation and Goal
confidence: 93%
“…The main distinguishing features of the cache are fully decentralized management, prefetching the whole file on opening, synchronizing the whole file on closing, and keeping minimal metadata. A set of tests with synthetic benchmarks and real-life applications such as map searching, crowd simulation, and graph processing demonstrated improved I/O performance, especially for long-running applications [21,22,23]. An additional feature of the cache that naturally benefits from NVRAM persistence is the safety of data during processing [24].…”
Section: Motivation and Goal
confidence: 99%
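The cache lifecycle described in the citation statements (prefetch the whole file on open, serve byte-level reads and writes from the cache, synchronize the whole file back on close) can be illustrated with a minimal single-node sketch. This is not the authors' implementation: the `CachedFile` class is hypothetical, and a plain in-memory `bytearray` stands in for the distributed NVRAM region, which in the real system is spread across cluster nodes behind the MPI I/O API.

```python
import os

class CachedFile:
    """Toy model of the cache lifecycle from the cited papers:
    prefetch on open, byte-addressable access, synchronize on close.
    A bytearray stands in for NVRAM (an assumption for illustration)."""

    def __init__(self, path):
        self.path = path
        # Prefetch: load the whole file into the cache when it is opened.
        with open(path, "rb") as f:
            self.cache = bytearray(f.read())

    def read_at(self, offset, size):
        # Byte-level access: small, scattered reads need no block alignment.
        return bytes(self.cache[offset:offset + size])

    def write_at(self, offset, data):
        end = offset + len(data)
        if end > len(self.cache):
            # Grow the cached image if the write extends past the end.
            self.cache.extend(b"\x00" * (end - len(self.cache)))
        self.cache[offset:end] = data

    def close(self):
        # Synchronize: write the whole cached file back on close.
        with open(self.path, "wb") as f:
            f.write(self.cache)

# Usage: small accesses at scattered offsets are served from the cache;
# the backing file is touched only at open and close.
with open("demo.bin", "wb") as f:
    f.write(b"hello world")
cf = CachedFile("demo.bin")
cf.write_at(6, b"NVRAM")
print(cf.read_at(0, 11))  # b'hello NVRAM'
cf.close()
os.remove("demo.bin")
```

The sketch shows why the design favors long-running applications with many small accesses: the one-time whole-file transfer at open and close is amortized, while every intermediate access is a cheap byte-range operation.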