Proceedings of the ACM SIGOPS 22nd Symposium on Operating Systems Principles 2009
DOI: 10.1145/1629575.1629589

Better I/O through byte-addressable, persistent memory

Abstract: Modern computer systems have been built around the assumption that persistent storage is accessed via a slow, block-based interface. However, new byte-addressable, persistent memory technologies such as phase change memory (PCM) offer fast, fine-grained access to persistent storage. In this paper, we present a file system and a hardware architecture that are designed around the properties of persistent, byte-addressable memory. Our file system, BPFS, uses a new technique called short-circuit shadow paging to pro…
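The abstract only names short-circuit shadow paging; as a rough, hedged illustration of the underlying idea (write new data out of place, then commit it with a single atomic pointer update at the lowest pointer that covers the change, instead of copying ancestor blocks all the way to the root), the C sketch below may help. Every identifier here (shadow_update, pm_alloc_block, persist, the block size) is an assumption made for illustration, not code or naming taken from the paper.

```c
#include <stdint.h>
#include <string.h>
#include <stdatomic.h>
#include <stdlib.h>

#define BLOCK_SIZE 4096

/* Stand-ins for illustration only: a real BPFS-style system would allocate
 * from persistent memory and flush/fence the written cache lines (see the
 * flush-and-fence sketch later on this page); here both are simulated. */
static void *pm_alloc_block(void)             { return calloc(1, BLOCK_SIZE); }
static void  persist(const void *p, size_t n) { (void)p; (void)n; /* flush + fence */ }

/* Replace the data behind *slot (a block pointer inside the file-system tree)
 * with an out-of-place copy, committed by one atomic pointer store. Because
 * the commit is a single 8-byte write, no ancestor blocks have to be copied
 * up to the root -- the "short circuit". */
static void shadow_update(_Atomic(void *) *slot, const void *new_data, size_t len)
{
    void *copy = pm_alloc_block();
    memcpy(copy, new_data, len);      /* write the new version aside          */
    persist(copy, len);               /* make the copy durable before commit  */

    atomic_store_explicit(slot, copy, memory_order_release);  /* atomic commit */
    persist((const void *)slot, sizeof *slot);
}
```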

Cited by 687 publications (457 citation statements) · References 25 publications
“…NV-heaps [6] provides some language-level features that are not provided in FP-Heap, such as safe points and garbage collection. PMFS [1], Aerie [16], BPFS [20], and SCMFS [21] all propose using a file system to manage non-volatile memory. Traditional file systems work well on low-speed devices.…”
Section: Related Work
confidence: 99%
“…The average overhead of a clflush and mfence combined is reported to be 250 ns [14], which makes this approach costly, given that persistent memory access times are expected to be on the order of tens to hundreds of nanoseconds [3,4,7]. The two instructions flush dirty data blocks from the CPU cache to persistent memory and wait for all memory writes to complete, and they incur high overhead in persistent memory [10,14,32,33].…”
Section: Mitigating the Ordering Overhead
confidence: 99%
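The excerpt above describes the flush-and-fence pattern used to make data durable on persistent memory. A minimal C sketch of that pattern follows, assuming an x86 machine with the SSE2 _mm_clflush/_mm_mfence intrinsics; persist_range and the 64-byte cache-line size are illustrative assumptions, not details from the cited papers.

```c
#include <stddef.h>
#include <stdint.h>
#include <emmintrin.h>   /* _mm_clflush, _mm_mfence (SSE2 intrinsics) */

#define CACHE_LINE 64u   /* assumed cache-line size */

/* Flush every cache line covering [p, p + len) out of the CPU cache, then
 * fence so these writes are ordered before anything issued afterwards. */
static void persist_range(const void *p, size_t len)
{
    uintptr_t line = (uintptr_t)p & ~(uintptr_t)(CACHE_LINE - 1);
    uintptr_t end  = (uintptr_t)p + len;
    for (; line < end; line += CACHE_LINE)
        _mm_clflush((const void *)line);
    _mm_mfence();
}
```

The roughly 250 ns cost of each such flush-and-fence, quoted in the excerpt, is the overhead that the hardware proposals discussed next aim to reduce.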
“…Several works tried to mitigate the ordering overhead in persistent memory with hardware support [10,32,33,34,35]. These can be classified into two approaches:…”
Section: Mitigating the Ordering Overhead
confidence: 99%