2007
DOI: 10.1145/1272998.1273017
Competitive prefetching for concurrent sequential I/O

Abstract: During concurrent I/O workloads, sequential access to one I/O stream can be interrupted by accesses to other streams in the system. Frequent switching between multiple sequential I/O streams may severely affect I/O efficiency due to long disk seek and rotational delays of disk-based storage devices. Aggressive prefetching can improve the granularity of sequential data access in such cases, but it comes with a higher risk of retrieving unneeded data. This paper proposes a competitive prefetching strategy that c…
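The core trade-off the abstract describes (larger prefetches amortize seek and rotational delays, but fetch more potentially unneeded data) can be sketched numerically. The snippet below is a minimal illustration, not the paper's actual algorithm: it assumes the competitive-prefetching idea of sizing a prefetch so that its sequential transfer time matches one seek-plus-rotation cost; the disk parameters and the function name are hypothetical.

```python
# Hedged sketch: choose a prefetch size whose transfer time equals the
# per-access overhead (seek + rotational delay), so switching between
# streams costs at most a bounded factor over pure sequential access.

def competitive_prefetch_size(seek_ms, rotation_ms, transfer_mib_per_s):
    """Return a prefetch size in bytes whose sequential transfer time
    roughly equals one seek plus one rotational delay."""
    access_cost_s = (seek_ms + rotation_ms) / 1000.0
    return int(access_cost_s * transfer_mib_per_s * 1024 * 1024)

# Example with assumed disk parameters: 8 ms seek, 4 ms rotation,
# 80 MiB/s sequential transfer rate.
size = competitive_prefetch_size(8.0, 4.0, 80.0)
print(size)  # roughly 1 MiB: 0.012 s * 80 MiB/s
```

With these assumed numbers, the disk spends as much time transferring useful data as it does positioning the head, which is the intuition behind bounding the slowdown relative to an optimal prefetch schedule.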


Cited by 18 publications (12 citation statements)
References 25 publications
“…However, such improvements need tools to analyse the whole software stack and the kernel–middleware interactions. The paper by Chuanpeng [13] talks about how to handle concurrent sequential I/O streams in a Virtual Machine setup and explains a similar issue in the VM middleware.…”
Section: B. Bandwidth Differentiation
mentioning
confidence: 99%
“…Many prefetching designs have emerged to hide I/O latencies [35,36,9,10]. Our work is orthogonal to prefetching but can be used alongside such techniques.…”
Section: Related Work
mentioning
confidence: 99%
“…Recent studies address these issues with various caching techniques. Examples include: (i) adding new metrics for cache replacement policies such as frequency [7], (ii) passing application hints that describe future accesses to the storage server [8], and (iii) identifying future data access patterns and prefetching blocks accordingly [9,10]. These ideas focus on the data that applications use and estimate which data blocks should be cached to improve hit rates.…”
Section: Introduction
mentioning
confidence: 99%
“…Even with a better replication approach, a streaming server handling concurrent access by many users can still incur slow disk I/O, since the files do not fit in main memory, which severely affects throughput. The latter problem is not new and has been studied in the literature before, in the form of smarter ways to fetch information from secondary storage [2], [3].…”
Section: Introduction
mentioning
confidence: 99%