2016 International SoC Design Conference (ISOCC)
DOI: 10.1109/isocc.2016.7799757

A RAM cache approach using host memory buffer of the NVMe interface

Cited by 3 publications (2 citation statements)
References 1 publication
“…In [22], Hong et al. used a host DRAM as a data cache instead of an address mapping table cache by modifying the NVMe command process and adding a direct memory access (DMA) path between system memory and the host DRAM. The proposed scheme improved the I/O performance by 23% for sequential writes compared to an architecture with internal DRAM in the SSD.…”
Section: PLOS ONE (citation type: mentioning)
confidence: 99%
“…They demonstrated that utilizing the HMB boosts the input/output operations per second (IOPS) performance significantly compared to other DRAM-less solutions. In [8], Hong et al. used a host DRAM as a data cache instead of an address mapping table cache by modifying the NVMe command process and adding a direct memory access (DMA) path between the system memory and the host DRAM. The proposed scheme improved the I/O performance by 23% for sequential writes over the architecture with internal DRAM in the SSD.…”
Section: B. HMB of NVMe Interface (citation type: mentioning)
confidence: 99%
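
The mechanism both citing statements describe rests on the NVMe Host Memory Buffer (HMB), through which a DRAM-less controller borrows a region of host DRAM that Hong et al. then use as a data cache rather than a mapping-table cache. As a rough host-side illustration, the C sketch below packs the Set Features admin command (feature ID 0Dh, Host Memory Buffer) that hands the controller a descriptor list for such a region. The field encodings follow my reading of the NVMe 1.2+ specification; the structs, enable_hmb(), submit_admin_cmd(), and all addresses and sizes are hypothetical placeholders for illustration, not the paper's implementation or a real driver API.

/* Sketch only: field encodings per NVMe 1.2+ as I read them; the command
 * struct and submit_admin_cmd() are illustrative stand-ins, not a real API. */
#include <stdint.h>
#include <stdio.h>

#define NVME_ADMIN_SET_FEATURES 0x09 /* admin opcode: Set Features       */
#define NVME_FEAT_HOST_MEM_BUF  0x0D /* feature ID: Host Memory Buffer   */

/* One 16-byte entry of the Host Memory Buffer descriptor list. */
struct hmb_descriptor {
    uint64_t badd;  /* host buffer DMA address, page aligned             */
    uint32_t bsize; /* buffer size in controller memory-page-size units  */
    uint32_t rsvd;
};

/* Minimal stand-in for an NVMe admin command (dword fields only). */
struct nvme_admin_cmd_sketch {
    uint8_t  opcode;
    uint32_t cdw10, cdw11, cdw12, cdw13, cdw14, cdw15;
};

/* Placeholder: a real driver would post this to the admin submission queue. */
static int submit_admin_cmd(const struct nvme_admin_cmd_sketch *cmd)
{
    printf("Set Features: FID=0x%02x EHM=%u HSIZE=%u pages, %u descriptor(s)\n",
           (unsigned)(cmd->cdw10 & 0xFF), (unsigned)(cmd->cdw11 & 0x1),
           (unsigned)cmd->cdw12, (unsigned)cmd->cdw15);
    return 0;
}

/* Enable the HMB: point the controller at a descriptor list covering
 * total_pages pages of host DRAM, which it may then use as its own RAM
 * (e.g. as the data cache the cited paper builds). */
static int enable_hmb(uint64_t desc_list_dma, uint32_t desc_count,
                      uint32_t total_pages)
{
    struct nvme_admin_cmd_sketch cmd = {
        .opcode = NVME_ADMIN_SET_FEATURES,
        .cdw10  = NVME_FEAT_HOST_MEM_BUF,
        .cdw11  = 1,                               /* EHM: enable host memory  */
        .cdw12  = total_pages,                     /* HSIZE, in CC.MPS units   */
        .cdw13  = (uint32_t)desc_list_dma,         /* descriptor list addr, lo */
        .cdw14  = (uint32_t)(desc_list_dma >> 32), /* descriptor list addr, hi */
        .cdw15  = desc_count,                      /* HMDLEC: entry count      */
    };
    return submit_admin_cmd(&cmd);
}

int main(void)
{
    /* Assumed example values: one 32 MiB host-DRAM chunk, 4 KiB pages. */
    struct hmb_descriptor desc = { .badd = 0x100000000ULL, .bsize = 8192, .rsvd = 0 };
    (void)desc; /* in a real driver this entry lives in DMA-coherent memory */
    return enable_hmb(0x1ffff0000ULL, 1, 8192);
}

What the citing statements attribute to Hong et al. is the step beyond this setup: once the controller owns the borrowed region, the modified NVMe command flow adds a DMA path so that the region serves as a data cache in place of on-SSD DRAM, yielding the reported 23% sequential-write gain.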