2018
DOI: 10.1109/tcad.2017.2766156
DLV: Exploiting Device Level Latency Variations for Performance Improvement on Flash Memory Storage Systems

Cited by 19 publications (7 citation statements)
References 29 publications
“…Unlike Fifer, which looks at micro-service chains, Swayam is specifically catered to single-function machine learning inference services. Exploiting Slack: Exploiting slack between tasks is a well-known technique, which has been applied in various domains of scheduling, including SSD controllers [31,37], memory controllers [41,67,77,89,96], and networks-on-chip [32,62,72]. In contrast to exploiting slack, we believe the novelty aspect lies in identifying the slack in relevance to the problem…”
Section: Related Work
Mentioning, confidence: 99%
“…Chen et al. [14] established a delay model to dispatch and schedule read/write requests separately according to the obtained delay value. Cui et al. [15] proposed to schedule read requests according to retention time and write requests according to hotness, in order to be aware of device process variations. Liu et al. [16] pointed out that 3D flash memory suffers from reduced utilization of chip-level parallelism when layer information is added, leading to sub-optimal parallel performance.…”
Section: Introduction
Mentioning, confidence: 99%
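The statement above summarizes delay-aware scheduling that handles reads and writes separately. Below is a minimal sketch of that idea, assuming a hypothetical per-request latency estimate for reads and a hotness score for writes; the class and method names are illustrative, not APIs from the cited works.

```python
# Minimal sketch of latency-aware dispatch with separate read/write queues.
# Request, est_latency_us, and hotness are illustrative assumptions.
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class Request:
    priority: float                   # lower value = dispatched earlier
    op: str = field(compare=False)    # "read" or "write"
    lba: int = field(compare=False)   # logical block address


class DelayAwareScheduler:
    def __init__(self):
        self.read_q = []    # reads ordered by estimated (retention-dependent) latency
        self.write_q = []   # writes ordered by data hotness (hot data first)

    def submit(self, op, lba, est_latency_us=0.0, hotness=0.0):
        if op == "read":
            # Reads with shorter estimated latency are served first.
            heapq.heappush(self.read_q, Request(est_latency_us, op, lba))
        else:
            # Hotter data gets higher priority (negated so the heap pops it first).
            heapq.heappush(self.write_q, Request(-hotness, op, lba))

    def dispatch(self):
        # Prefer latency-critical reads; fall back to writes.
        if self.read_q:
            return heapq.heappop(self.read_q)
        if self.write_q:
            return heapq.heappop(self.write_q)
        return None


sched = DelayAwareScheduler()
sched.submit("read", lba=42, est_latency_us=75.0)
sched.submit("write", lba=7, hotness=0.9)
print(sched.dispatch())   # the read is dispatched first
```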
“…We refer to the number of read-retry operations as RL (Read-Level) below. Many research works have been devoted to optimizing the large read latency of aged SSDs [11,12,13]. Since NAND flash adjusts the read voltage incrementally to sense the electrons in the cell, these methods keep track of the last successful RL for future reads at different granularities (i.e., page-level, block-level, and layer-level).…”
Section: Introduction
Mentioning, confidence: 99%
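The excerpt above describes caching the last successful read-retry level (RL) so that later reads can start sensing near it. The following is a minimal sketch of that bookkeeping at block granularity only (the cited works also consider page- and layer-level tracking); read_page_at_level() and MAX_RL are hypothetical placeholders, not a real flash interface.

```python
# Minimal sketch: remember the last read-retry level (RL) that succeeded per block,
# so future reads start near that level instead of retrying from RL 0.
MAX_RL = 7      # assumed maximum number of read-retry levels
last_rl = {}    # block id -> last RL that succeeded


def read_page_at_level(block, page, rl):
    """Placeholder for a raw page read at retry level `rl` (True = ECC success)."""
    # Faked here: pretend each block has some 'true' working level.
    return rl >= (block % 3)


def read_page(block, page):
    start = last_rl.get(block, 0)           # begin at the cached level, if any
    for rl in range(start, MAX_RL + 1):
        if read_page_at_level(block, page, rl):
            last_rl[block] = rl             # remember the level that worked
            return rl                       # retry level actually needed
    raise IOError("uncorrectable page")     # all retry levels exhausted


print(read_page(block=5, page=0))   # first read may step through several levels
print(read_page(block=5, page=1))   # later reads start from the cached RL
```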