Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis 2021
DOI: 10.1145/3458817.3476180
Simurgh: A Fully Decentralized and Secure NVMM User Space File System

Abstract: The availability of non-volatile main memory (NVMM) has started a new era for storage systems, and NVMM-specific file systems can support the extremely high data and metadata rates required by many HPC and data-intensive applications. Scaling metadata performance within NVMM file systems is nevertheless often restricted by the Linux kernel storage stack, while simply moving metadata management to user space can compromise security or flexibility. This paper introduces Simurgh, a hardware-assisted user…

Cited by 9 publications (3 citation statements). References 92 publications (73 reference statements).
“…In cases of SATA/IDE, the target system employs a hardware controller (i.e., a disk controller) to manage the storage interface protocol, so the interface driver usually handles I/O interrupts and system memory management. In contrast, in the case of NVMe, a kernel module (the NVMe driver) [11], [54], [55] directly accesses the PCIe bus over memory-mapped I/O and issues the request to the target SSD by composing an nvme_rw_command.…”

(A table from the citing survey spilled into the sentence above; it lists user-space I/O and file systems: Arrakis [14], [15], [16], Ishiguro et al. [29], Aerie [17], RUMA [56], NVMeDirect [12], Moneta-D [20], Direct-FUSE [18], Strata [30], Breeze [57], Simurgh [25], XFUSE [58], SplitFS [21], HyCache [59], Quill [26], Son et al. [60], [61], ZoFS [22], Davram [62], vNVML [27], [28], EvFS [19], Kuco [63], DLFS [64], URFS [65], UMFS [31], DevFS [23], CrossFS [24], FSP [32].)

Section: OS Storage Stack
confidence: 99%
“…Moti et al. [25] design a user-space file system named Simurgh, whose core design is based on virtualizing NVM. Since NVM achieves performance similar to DRAM and is byte-addressable, Simurgh maps NVM directly into the address space of each application, without employing DRAM to cache data and metadata from NVM.…”
Section: Virtualization
confidence: 99%