1988
DOI: 10.1007/bf01762111

Competitive snoopy caching

Abstract: In a snoopy cache multiprocessor system, each processor has a cache in which it stores blocks of data. Each cache is connected to a bus used to communicate with the other caches and with main memory. Each cache monitors the activity on the bus and in its own processor and decides which blocks of data to keep and which to discard. For several of the proposed architectures for snoopy caching systems, we present new on-line algorithms to be used by the caches to decide which blocks to retain and which t…

Cited by 550 publications (325 citation statements)
References 8 publications
“…Consider the representation of r (k) , (21). It is clear that r (k) < M/m since the competitive ratio M/m is attained by the trivial strategy that trades all dollars in the minimum possible rate, m, and the threat-based algorithm certainly performs strictly better.…”
Section: Proof As Usual Suppose That
confidence: 99%
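The trivial strategy described in this excerpt can be sketched as follows. This is a minimal illustration, not the threat-based algorithm itself: it assumes a one-way trading game with exchange rates bounded in [m, M], and the rate sequence used is hypothetical.

```python
# Sketch of the "trivial" one-way trading strategy from the excerpt.
# All rates lie in [m, M]. Converting all dollars at any observed rate
# (which is at least m) guarantees a competitive ratio of at most M/m,
# because the offline optimum converts everything at the maximum rate,
# which is at most M.

def trivial_return(rates, dollars=1.0):
    # Convert everything at the first rate seen (>= m by assumption).
    return dollars * rates[0]

def offline_optimum(rates, dollars=1.0):
    # The adversary's benchmark: convert everything at the best rate.
    return dollars * max(rates)

m, M = 1.0, 4.0
rates = [1.5, 3.0, 2.0, 4.0]  # hypothetical rate sequence in [m, M]
ratio = offline_optimum(rates) / trivial_return(rates)
assert ratio <= M / m  # the trivial strategy is M/m-competitive
```

The excerpt's point is that since even this trivial strategy attains M/m, the threat-based algorithm's ratio r(k) must be strictly smaller.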
“…Such a modeling corresponds to non-marking algorithms, as it does not allow the algorithm to use information beyond the set of pages that are currently in the cache. In the course of applying our implementation of the symbolic algorithm to paging, we have realized that the only nonmarking competitive algorithm for paging we are aware of, Flush-when-Full (FWF) [BEY98,KMRS88], is not lazy (also referred to as "demand paging" in [BEY98]); that is, it may evict from the cache more than a single page in case an eviction is required. From a practical standpoint, such evictions are wasteful, and a reasonable implementation of FWF would keep the cache full at all times and only mark the pages spuriously evicted by FWF -thus treating FWF as a marking algorithm.…”
Section: Symbolic Model-Checking Algorithm
confidence: 99%
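The non-lazy behaviour of Flush-when-Full (FWF) noted in this excerpt can be sketched as follows. This is an illustrative simulation under standard paging assumptions (cache of k pages, unit fault cost), not code from the cited works.

```python
# Sketch of Flush-when-Full (FWF) paging. On a fault with a full cache,
# FWF evicts *every* cached page, not just one -- the non-lazy
# ("non-demand-paging") behaviour the excerpt describes.

def fwf(requests, k):
    cache = set()
    faults = 0
    for page in requests:
        if page in cache:
            continue           # hit: no change
        faults += 1            # fault: page must be brought in
        if len(cache) == k:
            cache.clear()      # flush: k pages evicted for one fault
        cache.add(page)
    return faults
```

For example, `fwf([1, 2, 3, 1, 4, 1, 2], 3)` flushes the whole cache on the request for page 4, so pages 1 and 2 fault again afterwards; a lazy implementation would instead keep the cache full and only mark the spuriously evicted pages, as the excerpt suggests.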
“…The data stream model appears to be related to other work e.g., on competitive analysis [69], or I/O efficient algorithms [98]. However, it is more restricted in that it requires that a data item can never again be retrieved in main memory after its first pass (if it is a one-pass algorithm).…”
Section: The Data Stream Computation Model
confidence: 99%
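The one-pass restriction described in this excerpt can be illustrated with a small sketch. The statistics chosen (mean and maximum) are hypothetical examples; the point is only that each item is consumed once and the algorithm keeps O(1) state, so no item can be retrieved again after the pass.

```python
# Illustration of the one-pass data stream restriction: each item is
# read exactly once from an iterator and never revisited, so only
# constant-size summary state survives the pass.

def one_pass_stats(stream):
    count, total, maximum = 0, 0.0, float("-inf")
    for x in stream:          # items are gone once this loop consumes them
        count += 1
        total += x
        maximum = max(maximum, x)
    return total / count, maximum

# The iterator is exhausted after the call; a second pass is impossible.
mean, mx = one_pass_stats(iter([3.0, 1.0, 4.0, 1.0, 5.0]))
```

This is the restriction that distinguishes the streaming model from competitive analysis or I/O-efficient algorithms, which may revisit data.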