24th International Conference on Distributed Computing Systems (ICDCS 2004), Proceedings
DOI: 10.1109/icdcs.2004.1281581
ULC: a file block placement and replacement protocol to effectively exploit hierarchical locality in multi-level buffer caches

Cited by 39 publications (43 citation statements)
References 4 publications
“…Reuse distance has also been introduced into the management of hierarchical multi-level caches in a client-server system using the Unified Level-aware Caching (ULC) protocol [24]. There are two major differences between the ULC protocol and the LAC protocol presented here.…”
Section: Related Work on Reuse Distance and Locality (mentioning)
confidence: 99%
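
Reuse distance, the measure both the ULC and LAC work build on, is the number of distinct blocks accessed between two consecutive references to the same block. A minimal sketch of computing it from an access trace is given below; the function name and the example trace are illustrative, not taken from either paper.

    from collections import OrderedDict

    def reuse_distances(trace):
        """Return the reuse (LRU-stack) distance of every access in `trace`:
        the number of distinct blocks touched since the previous access to
        the same block, or None for a block's first (cold) access."""
        stack = OrderedDict()              # most recently used block is last
        distances = []
        for block in trace:
            if block in stack:
                keys = list(stack.keys())
                distances.append(len(keys) - 1 - keys.index(block))
                stack.move_to_end(block)
            else:
                distances.append(None)     # first touch: infinite distance
                stack[block] = True
        return distances

    # Block 'a' is re-referenced after two distinct blocks (b and c).
    print(reuse_distances(['a', 'b', 'c', 'a']))   # [None, None, None, 2]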
“…server protocols such as ULC [24] and DEMOTE [19]. Even if this redundancy were effectively addressed, it would not reduce the value of cooperative caching on the client side because cooperative caching allows cache size to increase in proportion to the number of clients, while the effective cache size of the server for each client decreases with increasing number of clients.…”
Section: General Workload Performance (mentioning)
confidence: 99%
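
The scaling argument in this excerpt can be made concrete with a back-of-the-envelope calculation; the cache sizes below are assumptions chosen purely to illustrate the two opposing trends, not figures from the paper.

    # Assumed sizes only: 1 GB of cache per client, a fixed 32 GB server buffer.
    CLIENT_CACHE_GB = 1
    SERVER_CACHE_GB = 32

    for n_clients in (8, 32, 128):
        aggregate_client = n_clients * CLIENT_CACHE_GB    # grows with n_clients
        server_per_client = SERVER_CACHE_GB / n_clients   # shrinks with n_clients
        print(f"{n_clients:4d} clients: cooperative client cache {aggregate_client:4d} GB, "
              f"server share per client {server_per_client:5.2f} GB")

With 8 clients the server still contributes 4 GB per client; with 128 clients the aggregate cooperative cache has grown to 128 GB while each client's effective server share has shrunk to 0.25 GB.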
“…Gniady et al [13] and Kim et al [14] partition the shared file system cache between multiple processes by detecting the access patterns. Jiang and Zhang [9] partition server buffers dynamically among the clients in accordance with their working set sizes. Gill and Modha [27] partition the cache dynamically among sequential and random I/O streams in order to reduce the read misses in the cache.…”
Section: Related Work (mentioning)
confidence: 99%
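
A simple way to picture working-set-based partitioning, the idea attributed to Jiang and Zhang above, is to split a fixed server buffer among clients in proportion to their measured working-set sizes. The sketch below is a generic illustration under that assumption, not the algorithm from [9]; the working-set sizes are presumed to be measured elsewhere (e.g., from recent access traces).

    def partition_buffer(total_blocks, working_sets):
        """Split `total_blocks` buffer blocks among clients in proportion to
        their working-set sizes, given as a {client: blocks} mapping."""
        total_ws = sum(working_sets.values())
        shares = {c: total_blocks * ws // total_ws
                  for c, ws in working_sets.items()}
        # Give any blocks lost to integer rounding to the largest client.
        leftover = total_blocks - sum(shares.values())
        if leftover:
            shares[max(working_sets, key=working_sets.get)] += leftover
        return shares

    # 10,000-block server buffer, three clients with unequal working sets.
    print(partition_buffer(10_000, {"A": 3_000, "B": 9_000, "C": 3_000}))
    # {'A': 2000, 'B': 6000, 'C': 2000}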
“…We refer to the kernel buffer cache that resides in main memory as the "storage cache." Extensive research [2], [3], [4], [5], [6], [7], [8], [9] has been done on improving the effectiveness of storage caching. However, little research has been done on providing quality of service (QoS) guarantees to multiple applications that exercise storage caches.…”
Section: Introduction (mentioning)
confidence: 99%
“…One major problem is that the locality information embedded in the streams of access requests from clients is not consistently analyzed and exploited, resulting in globally nonsystematic, and therefore suboptimal, placement and replacement of cached blocks across the hierarchy. We have proposed a coordinated multilevel cache management protocol based on consistent access-locality quantification and show its effectiveness in different platforms [10,11].…”
Section: Buffer Management in Multilevel Caching Systems (mentioning)
confidence: 99%
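
As a generic illustration of what level-aware placement across a hierarchy can look like, and explicitly not the actual ULC or LAC mechanism from [10,11], a block can be routed to the highest cache level whose capacity still covers the block's measured reuse distance; blocks reused too far apart fall through to lower levels or to disk.

    def choose_level(reuse_distance, cumulative_capacities):
        """Return the highest (closest-to-client) cache level whose cumulative
        capacity exceeds the block's reuse distance, or None if no level can
        hold the block until its next reuse.

        `cumulative_capacities` lists capacities from the top level down,
        e.g. [L1_blocks, L1_blocks + L2_blocks]."""
        if reuse_distance is None:          # no observed reuse yet: bypass caches
            return None
        for level, capacity in enumerate(cumulative_capacities, start=1):
            if reuse_distance < capacity:
                return level
        return None

    # Assumed two-level hierarchy: a 1,000-block client cache above a
    # cumulative 9,000-block server cache.
    levels = [1_000, 9_000]
    for d in (200, 4_000, 20_000):
        print(f"reuse distance {d:6d} -> level {choose_level(d, levels)}")
    # 200 -> level 1, 4,000 -> level 2, 20,000 -> None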