Proceedings of the Fifteenth Annual ACM Symposium on Parallel Algorithms and Architectures 2003
DOI: 10.1145/777412.777431
Integrated prefetching and caching in single and parallel disk systems

Abstract: We study integrated prefetching and caching in single and parallel disk systems. There exist two very popular approximation algorithms, called Aggressive and Conservative, for minimizing the total elapsed time in the single-disk problem. For D parallel disks, approximation algorithms are known for both the elapsed time and stall time performance measures. In particular, there exists a D-approximation algorithm for the stall time measure that uses D−1 additional memory locations in cache. In the first part of the …
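The single-disk execution model behind these algorithms (due to Cao et al.) can be sketched as a small simulator: serving a cached block costs one time unit, a fetch occupies the single disk for F units, and initiating a fetch requires evicting one cached block. The sketch below implements an aggressive-style rule; the function and parameter names are ours, not from the paper, and it makes one fetch decision per request rather than at every time step.

```python
# Minimal sketch of the single-disk integrated prefetching/caching model
# (Cao et al.): a cache hit costs 1 time unit, a fetch takes F units on the
# single disk, and starting a fetch evicts one cached block. Names here
# (simulate_aggressive, F, cache) are illustrative assumptions.

def next_use(requests, block, start):
    """Index of the next request for `block` at or after `start` (inf if none)."""
    for i in range(start, len(requests)):
        if requests[i] == block:
            return i
    return float("inf")

def simulate_aggressive(requests, cache, F):
    """Return (elapsed_time, stall_time) under an aggressive-style rule:
    start fetching the next missing block as soon as a safe eviction exists,
    evicting the cached block whose next use lies furthest in the future.
    `cache` is the (nonempty) set of initially cached blocks."""
    cache = set(cache)
    time, stall = 0, 0
    fetch_block, fetch_done = None, None   # in-flight fetch, completion time
    for i, r in enumerate(requests):
        if fetch_done is None:
            # Next block in the remaining stream that is not cached.
            missing = next((requests[j] for j in range(i, len(requests))
                            if requests[j] not in cache), None)
            if missing is not None:
                # Candidate victim: cached block needed furthest in the future.
                victim = max(cache, key=lambda b: next_use(requests, b, i))
                # Only safe if the victim is needed later than the fetched block.
                if next_use(requests, victim, i) > next_use(requests, missing, i):
                    cache.remove(victim)
                    fetch_block, fetch_done = missing, time + F
        if r not in cache:
            # The miss must be the in-flight fetch; stall until it lands.
            assert fetch_block == r
            stall += max(0, fetch_done - time)
            time = max(time, fetch_done)
            cache.add(fetch_block)
            fetch_block = fetch_done = None
        elif fetch_done is not None and time >= fetch_done:
            cache.add(fetch_block)           # fetch finished in the background
            fetch_block = fetch_done = None
        time += 1                            # serving the request costs 1 unit
    return time, stall
```

For example, with requests a, b, c, a, cache {a, b}, and F = 2, the rule delays the fetch of c until a's next use is far enough away to evict it, giving elapsed time n + stall, i.e. 4 + 3 = 7 here.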

Cited by 15 publications (13 citation statements)
References 26 publications (52 reference statements)
“…While the real-life problem is online, and has been extensively studied by Cao et al. [7], the offline problem was first solved in 1998 [3] using a linear program that was shown to always have an optimal integer solution despite not being totally unimodular. Later, in 2000 [2], a polynomial-time algorithm was given that models the problem as a multicommodity flow with some postprocessing. Formally the problem can be defined as follows.…”
Section: Prefetching
confidence: 99%
“…As a result, the tall/small job scheduling problem and the prefetch/caching problem can be solved in worst-case time O(n^3), improving over O(n^10) [4] and O*(n^18) [2], respectively. Implementations are available from the authors' home pages.…”
Section: Introduction
confidence: 99%
“…Alternatively, dynamic prefetching has been proposed that detects applications' reference patterns at runtime, e.g., prediction using probability graphs [21,50] and time-series modeling [47]. Prefetch algorithms tailored for parallel I/O systems have also been studied [2,30,31]. Speculative prefetching at the level of whole files or database objects has been proposed in many works [15,20,34,35,38].…”
Section: Related Work
confidence: 99%
“…Based on these interactions, a number of works have proposed integrated caching and prefetching schemes [2,11,30,31,32,39,46] that simultaneously identify and handle temporal and spatial I/O access patterns. FlexiCache [33] provides a new flexible interface that allows easy modification of disk cache management decisions using OS-level modules.…”
Section: Related Work
confidence: 99%
“…tween the two was not well understood until the seminal work of Cao et al. [6], who proposed to integrate caching with prefetching. They introduced the following execution model.…”
Section: Introduction
confidence: 99%