Proceedings 10th IEEE International Symposium on High Performance Distributed Computing
DOI: 10.1109/hpdc.2001.945198

Cooperative caching middleware for cluster-based servers

Abstract: We consider the use of cooperative caching to manage the memories of cluster-based servers. Over the last several years, a number of researchers have proposed locality-conscious servers that implement content-aware request distribution to address this problem [2,17,4,5,6]. During this development, it has become conventional wisdom that cooperative caching cannot match the performance of these servers [17]. Unfortunately, while locality-conscious servers provide very high performance, their request distribution…
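As a point of reference for the abstract's terminology, the following is a minimal sketch (illustrative only; the node names and the hash-based policy are assumptions, not the paper's mechanism) of content-aware request distribution: a front end inspects the requested URL and always routes the same content to the same back-end node, so each node's memory ends up caching a distinct slice of the working set. Cooperative caching, by contrast, lets any node serve a request and fetch a missing file from a peer's memory instead of from disk.

    import hashlib

    # Hypothetical cluster of back-end server nodes.
    BACKENDS = ["node0", "node1", "node2", "node3"]

    def route(url: str) -> str:
        """Content-aware request distribution (illustrative sketch):
        hash the requested URL so the same content always goes to the
        same back end, concentrating each file in one node's memory."""
        digest = hashlib.sha1(url.encode()).hexdigest()
        return BACKENDS[int(digest, 16) % len(BACKENDS)]

    if __name__ == "__main__":
        for u in ["/index.html", "/img/logo.png", "/docs/paper.pdf"]:
            print(u, "->", route(u))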

Cited by 24 publications (18 citation statements)
References 19 publications
“…Current and near-future network technologies clearly indicate that such a net-based virtual memory may perform much better than disk-based ones. The literature reports very good results both for general DSMs [2,3] and for specific Web applications [4][5][6]. A large and fast data repository may be used as a cache facility to improve performance of I/O bound applications, and as a primary storage facility for CPU bound applications running out-of-core on a single PE memory.…”
Section: Hoc Design Principles (mentioning)
confidence: 99%
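The "remote memory before disk" idea in the excerpt above can be sketched as follows; the get_from_peer_memory call and the block naming are assumptions standing in for a DSM or network lookup, not an API from the cited systems.

    def get_from_peer_memory(block_id: str):
        """Hypothetical lookup in another node's spare memory.
        Returns the block's bytes, or None on a miss."""
        return None  # placeholder: a real system would issue an RPC here

    def read_block(block_id: str, path: str, offset: int, size: int) -> bytes:
        data = get_from_peer_memory(block_id)   # fast path: remote DRAM
        if data is not None:
            return data
        with open(path, "rb") as f:             # slow path: local disk
            f.seek(offset)
            return f.read(size)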
“…In a similar spirit, memcached [7] provides an API to access a large distributed in-memory key-value store. In non-virtualized settings, the use of memory from other machines to support large memory workloads has been explored earlier [19], [20], [21], [22], [23], [8], [24], primarily in the 1990s. However, these systems did not comprehensively address the design and performance considerations in using cluster-wide memory for virtual machine workloads.…”
Section: F Local De-duplication (mentioning)
confidence: 99%
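The memcached-style usage mentioned above boils down to client-side key hashing over a set of in-memory stores. The sketch below is an illustrative stand-in (a trivial in-process dictionary per "server"), not memcached's actual client API.

    import hashlib

    class InMemoryKV:
        """Illustrative stand-in for one cache server's memory."""
        def __init__(self):
            self._data = {}
        def set(self, key, value):
            self._data[key] = value
        def get(self, key):
            return self._data.get(key)

    SERVERS = [InMemoryKV() for _ in range(3)]   # pretend these run on 3 machines

    def pick_server(key: str) -> InMemoryKV:
        # Client-side hashing: the key alone determines which server holds it,
        # so the cluster's memory behaves like one large key-value store.
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return SERVERS[h % len(SERVERS)]

    pick_server("user:42").set("user:42", b"profile bytes")
    print(pick_server("user:42").get("user:42"))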
“…Prior approaches to overcome the disk I/O bottleneck have examined the use of memory-resident databases [5], [6] and caching [7], [8] techniques. Gray and Putzolu [9] predicted in 1987 that "main memory will begin to look like secondary storage to processors and their secondary caches".…”
Section: Introduction (mentioning)
confidence: 99%
“…In LRU replacement, a block is placed on the top of the stack when the block is accessed, and it is removed when it reaches the bottom and another block not in the stack is accessed. If not reused, a block will move down in the stack as new blocks are accessed. Thus the distance of a block from the top of the stack to its current position in the stack defines its age or recency: how many distinct blocks have been subsequently accessed.…”
Section: A Locality-aware Protocol (mentioning)
confidence: 99%
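The stack-distance notion in the excerpt can be made concrete with a short sketch (illustrative code, not the cited protocol's implementation): keep blocks in LRU order and report, on each access, how many distinct blocks have been touched since that block's previous access.

    from collections import OrderedDict

    def lru_recencies(accesses):
        """For each access, yield the block's recency: the number of distinct
        blocks referenced since its previous access (None on first touch)."""
        stack = OrderedDict()          # most recently used block is last
        for block in accesses:
            if block in stack:
                order = list(stack)
                recency = len(order) - 1 - order.index(block)
                del stack[block]
            else:
                recency = None
            stack[block] = True        # move/place the block on top of the stack
            yield block, recency

    for b, r in lru_recencies(["a", "b", "c", "a", "b", "b"]):
        print(b, r)   # a None, b None, c None, a 2, b 2, b 0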
“…To avoid a bottleneck at the server, the home-based strategy uses a peer-to-peer distributed index, similar to the distributed hash table used to organize a structured peer-to-peer system. As another example, cooperative caching is used to manage cached data in cluster-based servers, each with its own hard disks [2]. It is also used to improve the scalability of network file systems [1].…”
Section: Other Related Work (mentioning)
confidence: 99%
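The home-based, peer-to-peer index mentioned in the excerpt is commonly realized with consistent hashing; the sketch below assumes that mechanism purely for illustration. Each block ID maps to a "home" node responsible for tracking where the block is cached, so no single server holds the whole index.

    import bisect
    import hashlib

    def _h(s: str) -> int:
        return int(hashlib.sha1(s.encode()).hexdigest(), 16)

    class ConsistentHashRing:
        """Illustrative consistent-hash ring: the home node for a block is the
        first node clockwise from the block's hash, so the index is spread
        across peers instead of living on one server."""
        def __init__(self, nodes, replicas=50):
            self._ring = sorted((_h(f"{n}#{i}"), n)
                                for n in nodes for i in range(replicas))
            self._keys = [k for k, _ in self._ring]

        def home(self, block_id: str) -> str:
            i = bisect.bisect(self._keys, _h(block_id)) % len(self._ring)
            return self._ring[i][1]

    ring = ConsistentHashRing(["node0", "node1", "node2", "node3"])
    print(ring.home("file:/var/www/index.html"))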