2001
DOI: 10.1109/90.944339

Analysis of Web caching architectures: hierarchical and distributed caching

Cited by 196 publications (155 citation statements)
References 14 publications
“…Regular Q-ary trees are commonly used for the derivation of numerical results for algorithms operating on trees [37,10]; it has also been seen that numerical results from regular tree topologies are in good accordance with experimental results from actual internet tree topologies [38]. The entire set of parameters (demand and topology) for each experiment is indicated in the title of the corresponding graph.…”
Section: Numerical Results Under iGreedy
confidence: 57%
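As a small illustration of the topology mentioned in the excerpt above, the sketch below builds a regular Q-ary tree of a given depth as an adjacency list. The function name and parameters are hypothetical; it only shows the kind of structure such numerical experiments run on, not code from the cited work.

```python
# Hypothetical sketch: build a regular Q-ary tree of the kind used for
# numerical experiments on tree topologies (names and parameters are illustrative).
def regular_q_ary_tree(q, depth):
    """Return {node: [children]} for a complete q-ary tree of the given depth.

    Node 0 is the root (e.g. the origin server); the leaves would play the
    role of client-facing caches in a hierarchical caching experiment.
    """
    children = {0: []}
    frontier = [0]
    next_id = 1
    for _ in range(depth):
        new_frontier = []
        for node in frontier:
            for _ in range(q):
                children[node].append(next_id)
                children[next_id] = []
                new_frontier.append(next_id)
                next_id += 1
        frontier = new_frontier
    return children

# Example: a binary (Q=2) tree of depth 3 has 1 + 2 + 4 + 8 = 15 nodes.
tree = regular_q_ary_tree(q=2, depth=3)
assert len(tree) == 15
```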
“…These percentages become much larger under unequal client request rates λ_j. Table 3 presents some examples with λ_j's taken from a uniform distribution in [1,10]. A reduction as large as 40% over equal-share, and 42% over big-top, can be achieved by the allocation under iGreedy.…”
Section: Using iGreedy With Caching/Replacement
confidence: 99%
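The excerpt above only reports aggregate results, so iGreedy itself is not reproduced here. The sketch below merely shows, under assumed names and values, how unequal request rates λ_j drawn uniformly from [1, 10] could be generated, and how an equal-share split of a cache budget differs from a simple rate-proportional split used here only as a contrasting baseline. All identifiers and the budget value are hypothetical.

```python
import random

# Hypothetical setup: J clients with request rates drawn uniformly from [1, 10],
# mirroring the experiments the excerpt describes, and a total cache budget to split.
random.seed(0)
J = 8
rates = [random.uniform(1, 10) for _ in range(J)]   # the lambda_j's
budget = 1000                                        # total cache slots (assumed)

# Equal-share baseline: every client gets the same slice of the budget.
equal_share = [budget / J] * J

# A simple rate-proportional split, shown only as a contrasting baseline;
# it is NOT the iGreedy allocation from the cited paper.
total_rate = sum(rates)
proportional = [budget * r / total_rate for r in rates]

for lam, eq, prop in zip(rates, equal_share, proportional):
    print(f"lambda={lam:5.2f}  equal={eq:6.1f}  proportional={prop:6.1f}")
```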
“…However, it is not scalable and has several drawbacks [17]. First, every hierarchy layer introduces additional delay; second, many redundant document copies are stored at every hierarchy level; and third, higher-level caches tend to become bottlenecks.…”
Section: Motivations
confidence: 99%
“…With this in mind, Krishnan et al. (2000) imposed their so-called Full Independence Assumption, under which a cache has an effective hit rate of zero after having encountered an equal-sized cache previously. Rodriguez et al. (2001) extended this to the case of two successive caches on a shortest path to the server having hit rates of α and β where β > α, the effective hit rate of the second (larger) cache now being β − α. To account for the case of a larger cache being encountered first we further extend this slightly by assigning an effective hit rate of…”
Section: Hierarchical Cache Location
confidence: 99%
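Read literally, the rule in the excerpt above can be written as a tiny helper: the first cache on the path toward the server contributes its full hit rate, and a later cache contributes only the amount by which its hit rate exceeds the best rate already encountered (zero if it is no larger, which also covers the larger-cache-first case the authors go on to handle; the excerpt is truncated at that point). The function below is only an illustration of that reading, not code from either cited paper.

```python
def effective_hit_rates(path_hit_rates):
    """Effective hit rate of each cache along a path toward the server.

    Under the Full Independence Assumption (Krishnan et al., 2000) a cache
    adds nothing after an equal or better cache has already been seen; under
    the extension of Rodriguez et al. (2001), a later cache with hit rate
    beta > alpha adds beta - alpha.  Illustrative sketch only.
    """
    effective = []
    best_so_far = 0.0
    for rate in path_hit_rates:
        # Only the improvement over the best cache seen so far counts.
        effective.append(max(rate - best_so_far, 0.0))
        best_so_far = max(best_so_far, rate)
    return effective

# Example: hit rates 0.25 then 0.75 give effective rates 0.25 and 0.5;
# a larger cache encountered first (0.75 then 0.25) leaves the second at 0.0.
assert effective_hit_rates([0.25, 0.75]) == [0.25, 0.5]
assert effective_hit_rates([0.75, 0.25]) == [0.75, 0.0]
```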