2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA) 2018
DOI: 10.1109/isca.2018.00041
FLIN: Enabling Fairness and Enhancing Performance in Modern NVMe Solid State Drives


Cited by 73 publications (42 citation statements)
References 67 publications
“…The FTL splits each request, varying in size from 512 bytes to several megabytes, into multiple sub-requests in logical page units. The FTL stripes the sub-requests to multiple flash memory chips for internal parallelism, and schedules sub-request processing at the chip level [15], [32], [42], [49]. In the data processing phase, the processing elements, such as embedded processors or accelerators, execute in-storage processing functions and then return the results to the host.…”
Section: B. Description on Request Flow of In-Storage Processing (mentioning)
confidence: 99%
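The splitting-and-striping flow described in this excerpt can be illustrated with a minimal sketch. The page size, chip count, and round-robin striping policy below are illustrative assumptions, not details taken from the cited papers.

```python
# Hypothetical sketch: an FTL splits a host request into page-unit
# sub-requests and stripes them across flash chips for parallelism.
# PAGE_SIZE, NUM_CHIPS, and the LPA-modulo striping policy are
# assumptions for illustration only.

PAGE_SIZE = 4096      # logical page unit (bytes)
NUM_CHIPS = 8         # flash memory chips available for striping

def split_and_stripe(start_lpa: int, size_bytes: int):
    """Split a request into page-unit sub-requests and assign each
    to a chip round-robin, so each chip can schedule independently."""
    num_pages = -(-size_bytes // PAGE_SIZE)  # ceiling division
    subrequests = []
    for i in range(num_pages):
        lpa = start_lpa + i
        chip = lpa % NUM_CHIPS               # simple striping policy
        subrequests.append({"lpa": lpa, "chip": chip})
    return subrequests

# A 16 KiB request starting at LPA 10 becomes four page-sized
# sub-requests striped across four consecutive chips.
subs = split_and_stripe(10, 16 * 1024)
```

Requests between 512 bytes and several megabytes would yield anywhere from one to thousands of such sub-requests under this model.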
“…However, data access and processing are still separated, making in-storage processing less efficient. On the other hand, there have also been various studies on in-storage scheduling [15], [21], [36], [49], [58]. Most of them improve the overall throughput by leveraging the parallel processing of flash memory.…”
Section: Introduction (mentioning)
confidence: 99%
“…This work groups I/O requests based on row-buffer locality and focuses on inter-application request scheduling. Tavakkol et al. (2018) proposed a flash-level interference-aware scheduler as an I/O request scheduling mechanism. Its goal is to provide fairness among requests using a three-stage scheduling algorithm that mitigates sources of interference such as differences in request access patterns, the ratio of reads to writes, and garbage collection.…”
Section: Scheduling (mentioning)
confidence: 99%
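The fairness goal mentioned in this excerpt can be made concrete with a small sketch of a slowdown-based fairness metric. The exact formulation here (fairness = minimum slowdown divided by maximum slowdown across flows) follows common usage in the fairness literature and is an assumption for illustration, not a quote from the paper.

```python
# Hedged sketch of a slowdown-based fairness metric of the kind
# interference-aware schedulers optimize. Formulation is assumed:
# slowdown = shared latency / alone latency,
# fairness = min(slowdowns) / max(slowdowns), in (0, 1].

def slowdown(shared_latency: float, alone_latency: float) -> float:
    """How much slower a flow runs when sharing the SSD with others."""
    return shared_latency / alone_latency

def fairness(flows):
    """flows: list of (shared_latency, alone_latency) pairs.
    Returns 1.0 when all flows are slowed equally (perfectly fair)."""
    slowdowns = [slowdown(s, a) for s, a in flows]
    return min(slowdowns) / max(slowdowns)

# Two flows: one slowed 4x by interference, one slowed 1.25x.
f = fairness([(400.0, 100.0), (250.0, 200.0)])  # → 0.3125
```

Under this metric, a scheduler that equalizes the slowdowns of interfering flows drives fairness toward 1.0.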
“…There are two primary ways to address this challenge: (i) relying on device customization at the hardware layer (e.g., the Flash Translation Layer (FTL) or Open Channel) [1], [3]–[9]; however, these solutions require special hardware support and are thus hard to apply to conventional SATA-based SSDs, which still dominate the SSD market [10]. (ii) Relying on SSD-friendly I/O schedulers [11]–[15], which leverage SSD features (e.g.…”
Section: Introduction (mentioning)
confidence: 99%
“…While these schedulers are in large part successful, they mostly ignore I/O request queueing which is an important layer in SATA-based SSDs. I/O request queueing, such as native command queueing (NCQ) [16] in SATA and submission queue (SQ) [9] in NVMe, can be considered as the junction of operating system and storage device. They are adopted to fully exploit parallelism in SSDs.…”
Section: Introduction (mentioning)
confidence: 99%
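The request-queueing layer described in this excerpt can be sketched as a producer/consumer ring, modeled loosely on an NVMe submission queue: the host advances a tail pointer as it enqueues commands, and the device advances a head pointer as it consumes them. The queue depth, field names, and methods below are illustrative assumptions, not the NVMe or SATA specification.

```python
# Minimal sketch of an I/O request queue at the OS/device junction,
# loosely modeled on an NVMe submission queue (SQ). One ring slot is
# kept empty to distinguish "full" from "empty". All names and the
# depth are illustrative assumptions.

class SubmissionQueue:
    def __init__(self, depth: int = 32):
        self.entries = [None] * depth
        self.depth = depth
        self.head = 0   # advanced by the device (consumer)
        self.tail = 0   # advanced by the host (producer)

    def is_full(self) -> bool:
        return (self.tail + 1) % self.depth == self.head

    def submit(self, command) -> bool:
        """Host side: enqueue a command if a slot is free."""
        if self.is_full():
            return False
        self.entries[self.tail] = command
        self.tail = (self.tail + 1) % self.depth
        return True

    def fetch(self):
        """Device side: dequeue the next command, or None if empty."""
        if self.head == self.tail:
            return None
        cmd = self.entries[self.head]
        self.head = (self.head + 1) % self.depth
        return cmd

sq = SubmissionQueue(depth=4)
sq.submit("read LPA 7")
sq.submit("write LPA 9")
```

Keeping many commands in flight through such a queue is what lets the device exploit its internal chip-level parallelism.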