2002
DOI: 10.1007/3-540-47906-6_17

The Impact of Replacement Granularity on Video Caching

Abstract: This paper examines the idea that large objects, such as video files, should not be cached or replaced in their entirety, but rather be partitioned into chunks, with replacement decisions applied at the chunk level. It is shown that a higher byte hit ratio (BHR) can be achieved through partial replacement. The price paid for the improved BHR is that the replacement algorithm, e.g. LRU, takes longer to reach its steady-state BHR. It is demonstrated that this problem could be addressed …
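The paper's algorithmic details are elided above; as a rough illustration of chunk-level replacement only, the following Python sketch (hypothetical class and parameter names, not taken from the paper) caches and evicts fixed-size chunks with plain LRU and tracks the byte hit ratio:

from collections import OrderedDict

class ChunkLRUCache:
    """Illustrative chunk-level LRU cache (assumed design, not the paper's code).
    Videos are split into fixed-size chunks; each chunk is cached and evicted
    independently, so a video can be partially resident."""

    def __init__(self, capacity_bytes, chunk_bytes=1 << 20):
        self.capacity = capacity_bytes
        self.chunk_bytes = chunk_bytes
        self.cache = OrderedDict()   # (video_id, chunk_idx) -> chunk size in bytes
        self.used = 0
        self.hit_bytes = 0
        self.total_bytes = 0

    def request(self, video_id, video_bytes):
        """Stream a whole video; count byte hits chunk by chunk."""
        n_chunks = -(-video_bytes // self.chunk_bytes)   # ceiling division
        for idx in range(n_chunks):
            size = min(self.chunk_bytes, video_bytes - idx * self.chunk_bytes)
            key = (video_id, idx)
            self.total_bytes += size
            if key in self.cache:
                self.cache.move_to_end(key)              # LRU update per chunk
                self.hit_bytes += size
            else:
                while self.used + size > self.capacity and self.cache:
                    _, evicted = self.cache.popitem(last=False)  # evict LRU chunk
                    self.used -= evicted
                self.cache[key] = size
                self.used += size

    def byte_hit_ratio(self):
        return self.hit_bytes / self.total_bytes if self.total_bytes else 0.0

Because eviction operates on chunks rather than whole videos, a partially popular video keeps only its hot chunks resident, which is the source of the BHR gain the abstract describes; the trade-off is that many more, smaller objects must cycle through the LRU stack before the BHR settles.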

Cited by 15 publications (10 citation statements)
References 10 publications (11 reference statements)
“…In contrast, the responsiveness of the FCS with probabilistic LRU will be far worse than that of FCS with plain LRU presented in figure 15, since under the probabilistic LRU, only a small fraction of requests is taken into account. Based on the findings about the tradeoff between efficiency and adaptability to demand changes discussed in figures 19, 20 and more analytically in [13], the difference in the response time and the relatively small difference in the cost reduction ratio under no demand changes make the Chunk Based Differentiation scheme an attractive solution when demand changes occur.…”
Section: The Chunk Based Differentiation Scheme
confidence: 98%
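The excerpt does not define the probabilistic LRU it compares against; one common reading, assumed here purely for illustration, is that a hit refreshes an entry's recency only with some probability p, so most requests leave the LRU order untouched and the cache adapts to demand shifts more slowly:

import random
from collections import OrderedDict

def probabilistic_lru_touch(cache: OrderedDict, key, size, p=0.1):
    """Assumed reading of 'probabilistic LRU': on a hit, refresh recency only
    with probability p; on a miss, insert at the MRU end.
    Eviction is omitted for brevity."""
    if key in cache:
        if random.random() < p:      # only a fraction of hits update recency
            cache.move_to_end(key)
        return True                  # byte hit
    cache[key] = size                # miss: insert as most recently used
    return False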
“…By employing access cost dependent chunk sizes an overall access cost reduction is achieved. Part of the work presented here was first introduced in [13].…”
Section: Introduction
confidence: 99%
“…Examples of partial caching are caching of a prefix [26], prefix and selected frames [17,15], prefix assisted periodic broadcast of popular videos [10], optimal proxy prefix allocation integrated with server-based reactive transmission (batching, patching, stream-merging) [28], bursty parts of a video [32], hotspot segments [9], popularity-based prefix [22], segment-based prefix caching [30], variable sized chunk based video caching [2] and distributed architectures for partial caching [1,3] of a video. Some of these caching schemes do not use any dynamic replacement but use periodic cache decisions, e.g.…”
Section: Related Work
confidence: 99%
“…Video segments are ejected from the cache according to a caching value which is based on how recently they have been requested and their distance from the beginning of the video file. In [11], the video segmentation approach is studied in general, and the trade-off between BHR and responsiveness to popularity changes is examined for fixed and variable video segmentation schemes. However, the authors in [11] apply a simple LRU replacement policy to videos that are not currently played and, in contrast to [9], they do not adopt the idea of dedicating a portion of the cache to caching prefixes of the various videos.…”
Section: Introduction
confidence: 99%
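The cited scheme's exact caching-value function is not given in this excerpt; a minimal sketch, assuming a value that simply rewards recent access and proximity to the start of the video (hypothetical weights and formula), could look like:

import time

def caching_value(last_access_time, segment_index,
                  recency_weight=1.0, position_weight=1.0, now=None):
    """Hypothetical caching value: segments requested recently and lying close
    to the start of the video score higher and are evicted last."""
    now = time.time() if now is None else now
    recency = 1.0 / (1.0 + (now - last_access_time))   # more recent -> larger
    position = 1.0 / (1.0 + segment_index)             # earlier segment -> larger
    return recency_weight * recency + position_weight * position

def pick_victim(segments):
    """segments: {segment_key: (last_access_time, segment_index)}.
    Returns the key with the lowest caching value, i.e. the eviction candidate."""
    return min(segments, key=lambda k: caching_value(*segments[k]))

Weighting the segment position protects video prefixes, which is consistent with the prefix-caching motivation in the statement above, but the actual formula used by the cited work may differ.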
“…As far as sole proxy caching is concerned, research has focused on two directions: (1) the effective maximization of delivered quality to heterogeneous clients (see [5][6][7]) and (2) the reduction of server loads, network traffic and user access latencies (see [8][9][10][11][12][13]). …”
Section: Introduction
confidence: 99%