2022
DOI: 10.1109/tpds.2021.3097884

Accelerating HDF5 I/O for Exascale Using DAOS

Cited by 11 publications (4 citation statements)
References 15 publications

“…A recent study [20] explored the performance of HDF5 over the DAOS object interface. This object-centric design enabled HDF5 to transition away from traditional block-based storage, thereby circumventing the constraints posed by POSIX.…”
Section: H. WarpX Laser Wakefield Simulation (mentioning)
confidence: 99%
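
To make that mechanism concrete, the sketch below shows a plain HDF5 C program; under HDF5's Virtual Object Layer (VOL), the same code can be redirected from a POSIX file to DAOS objects without source changes by loading the DAOS VOL connector at runtime, for example via the HDF5_PLUGIN_PATH and HDF5_VOL_CONNECTOR environment variables. This is a minimal sketch: the file name, dataset name, and dimensions are illustrative assumptions, not taken from the cited study.

/* Minimal sketch, assuming the public HDF5 DAOS VOL connector.
 * With e.g.
 *   export HDF5_PLUGIN_PATH=<connector install dir>   (assumed path)
 *   export HDF5_VOL_CONNECTOR=daos
 * the calls below target DAOS container objects instead of a
 * block-based POSIX file; without them, they write an ordinary file. */
#include <hdf5.h>

int main(void)
{
    static int data[256][256];                 /* illustrative payload */
    hsize_t dims[2] = {256, 256};

    for (hsize_t i = 0; i < dims[0]; i++)
        for (hsize_t j = 0; j < dims[1]; j++)
            data[i][j] = (int)(i + j);

    hid_t file  = H5Fcreate("fields.h5", H5F_ACC_TRUNC,
                            H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(2, dims, NULL);
    hid_t dset  = H5Dcreate2(file, "Ex", H5T_NATIVE_INT, space,
                             H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    H5Dwrite(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, data);

    H5Dclose(dset);
    H5Sclose(space);
    H5Fclose(file);
    return 0;
}

The point of the VOL design is exactly this transparency: the application keeps the HDF5 data model and API while the storage backend changes underneath it.
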
“…Examples include images of varying sizes (one megabyte to several hundred megabytes), matrices based on integers, floats or double precision numbers, or very long text files consisting of DNA sequences. Previously, efforts like ROOT (Antcheva et al, 2011), HDF5 (Soumagne et al, 2022) and similar file formats have been used to exploit the underlying structure to improve the overall application's performance. These successful efforts clearly highlight that scientific datasets need special attention and the performance of the scientific applications can be significantly improved by exploiting the underlying structure in the datasets.…”
Section: Scientific Datasets (mentioning)
confidence: 99%
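
As one concrete instance of exploiting dataset structure, the hedged sketch below uses HDF5's chunked storage layout plus deflate compression: readers can fetch only the tiles they need, and redundancy within each chunk is compressed away. File name, dataset name, and chunk sizes are illustrative assumptions, and the deflate filter requires a zlib-enabled HDF5 build.

/* Hedged sketch: chunked + compressed dataset layout, one common way
 * formats like HDF5 exploit the structure of large regular arrays. */
#include <hdf5.h>

int main(void)
{
    hsize_t dims[2]  = {4096, 4096};   /* global matrix, illustrative */
    hsize_t chunk[2] = {256, 256};     /* tile size, illustrative     */

    hid_t file  = H5Fcreate("matrix.h5", H5F_ACC_TRUNC,
                            H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(2, dims, NULL);

    /* Chunked layout enables partial reads of just the needed tiles;
     * deflate compresses redundancy within each chunk. */
    hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
    H5Pset_chunk(dcpl, 2, chunk);
    H5Pset_deflate(dcpl, 6);           /* zlib level 6, needs zlib build */

    hid_t dset = H5Dcreate2(file, "matrix", H5T_NATIVE_DOUBLE, space,
                            H5P_DEFAULT, dcpl, H5P_DEFAULT);

    H5Dclose(dset);
    H5Pclose(dcpl);
    H5Sclose(space);
    H5Fclose(file);
    return 0;
}
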
“…Previous results for local and distributed persistent memory file systems with NVM are also promising for several real-world applications, exemplifying significant improvements over current state-of-the-art NVMe SSD storage. However, most of the benchmarks employed [21], [22], [23], [24], [25] are not scientific workflows or are not based on common stressors for scientific patterns in supercomputing such as BTIO [26] with all of its variants under MPI. Furthermore, these works do not compare local and distributed storage over RDMA, and they also do not compare to common storage devices such as HDD or SATA-SSD, which are still very common in large-scale distributed storage systems.…”
Section: Related Work (mentioning)
confidence: 99%
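
For readers unfamiliar with the access pattern such MPI benchmarks stress, the sketch below shows a collective parallel HDF5 write in which each rank writes one row block of a shared 2-D array. This is a simplified illustration, not BTIO itself (BTIO uses a block-tridiagonal decomposition); all names and sizes are assumptions, and it requires a parallel HDF5 build (e.g. compiled with h5pcc and run under mpirun).

/* Hedged sketch: collective MPI-IO through parallel HDF5.
 * Each rank owns one contiguous row block of a global 2-D array;
 * the write is issued collectively so the MPI-IO layer can aggregate. */
#include <hdf5.h>
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    hsize_t rows = 64, cols = 1024;                      /* illustrative */
    hsize_t gdims[2] = {rows * (hsize_t)nprocs, cols};   /* global shape */
    hsize_t count[2] = {rows, cols};                     /* local block  */
    hsize_t start[2] = {rows * (hsize_t)rank, 0};        /* block offset */

    double *buf = malloc(rows * cols * sizeof(double));
    for (hsize_t i = 0; i < rows * cols; i++)
        buf[i] = (double)rank;

    /* Open the shared file through the MPI-IO driver. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);
    hid_t file = H5Fcreate("btio_like.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    hid_t fspace = H5Screate_simple(2, gdims, NULL);
    hid_t dset = H5Dcreate2(file, "field", H5T_NATIVE_DOUBLE, fspace,
                            H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    /* Select this rank's row block in the file, then write collectively. */
    H5Sselect_hyperslab(fspace, H5S_SELECT_SET, start, NULL, count, NULL);
    hid_t mspace = H5Screate_simple(2, count, NULL);

    hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

    H5Dwrite(dset, H5T_NATIVE_DOUBLE, mspace, fspace, dxpl, buf);

    H5Pclose(dxpl); H5Sclose(mspace); H5Sclose(fspace);
    H5Dclose(dset); H5Pclose(fapl); H5Fclose(file);
    free(buf);
    MPI_Finalize();
    return 0;
}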