In this paper, the idea is examined that large objects, such as video files, should not be cached or replaced in their entirety, but rather be partitioned into chunks, with replacement decisions applied at the chunk level. It is shown that a higher byte hit ratio (BHR) can be achieved through such partial replacement. The price paid for the improved BHR is that the replacement algorithm, e.g. LRU, takes longer to reach the steady-state BHR. It is demonstrated that this problem can be addressed by a hybrid caching scheme that employs variable-sized chunks: small chunks maximize the BHR during periods of stable video popularity, while large chunks are used when extreme popularity changes occur, assisting fast convergence to the new steady-state BHR.
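The chunk-level replacement idea described above can be illustrated with a minimal sketch: an LRU cache whose unit of admission and eviction is a fixed-size chunk rather than a whole object, with the BHR counted per byte. The class and method names are illustrative, not taken from the paper.

```python
from collections import OrderedDict

class ChunkLRUCache:
    """Illustrative LRU cache that stores fixed-size chunks of objects
    rather than whole objects; hits and evictions happen per chunk."""

    def __init__(self, capacity_bytes, chunk_size):
        self.capacity = capacity_bytes
        self.chunk_size = chunk_size
        self.store = OrderedDict()   # (object_id, chunk_index) -> chunk size in bytes
        self.used = 0
        self.hit_bytes = 0
        self.total_bytes = 0

    def request(self, object_id, object_size):
        """Request a whole object; each chunk is looked up independently."""
        n_chunks = -(-object_size // self.chunk_size)  # ceiling division
        for i in range(n_chunks):
            size = min(self.chunk_size, object_size - i * self.chunk_size)
            key = (object_id, i)
            self.total_bytes += size
            if key in self.store:
                self.store.move_to_end(key)  # hit: refresh LRU recency
                self.hit_bytes += size
            else:
                # Evict least-recently-used chunks (possibly of the same
                # object) until the new chunk fits.
                while self.used + size > self.capacity and self.store:
                    _, evicted_size = self.store.popitem(last=False)
                    self.used -= evicted_size
                self.store[key] = size
                self.used += size

    def byte_hit_ratio(self):
        return self.hit_bytes / self.total_bytes if self.total_bytes else 0.0
```

For example, with a 100-byte cache and 10-byte chunks, requesting a 50-byte object twice yields a BHR of 0.5, since every chunk of the second request is served from the cache.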
One of the most widely considered cache replacement policies is Least Recently Used (LRU), on which many other policies have been based. LRU has been studied analytically in the literature under the assumption that object requests are independent. However, this assumption does not appear to agree with recent studies of Web traces, which indicate the existence of short-term correlations among the requests. This paper introduces an approximate analysis that fairly accurately predicts the hit ratio of the LRU policy in the presence of short-term correlations. The approximation is based on the relation between the working set model and LRU, while the request generation process is assumed to follow a recently proposed model for Web traces that captures short-term correlations among the requests. The accuracy of the introduced approximate analysis is validated on synthetic as well as real Web traces.
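The independent-request baseline that the above analysis extends can be sketched by simulation: LRU hit ratio under independent requests drawn from a Zipf-like popularity law. This is only an illustration of the classical independent reference model, not the paper's correlated-request analysis; all parameter values are arbitrary.

```python
import random
from collections import OrderedDict

def lru_hit_ratio(n_objects=100, cache_size=20, n_requests=50000,
                  alpha=0.8, seed=1):
    """Estimate the LRU hit ratio by Monte Carlo simulation, assuming
    independent requests with Zipf(alpha) object popularities
    (the independent reference model, IRM)."""
    rng = random.Random(seed)
    # Zipf-like popularity weights: object i has weight 1 / (i+1)^alpha.
    weights = [1.0 / (i + 1) ** alpha for i in range(n_objects)]
    cache = OrderedDict()  # object id -> True, ordered by recency
    hits = 0
    for _ in range(n_requests):
        obj = rng.choices(range(n_objects), weights=weights)[0]
        if obj in cache:
            hits += 1
            cache.move_to_end(obj)          # refresh recency on a hit
        else:
            if len(cache) >= cache_size:
                cache.popitem(last=False)   # evict least recently used
            cache[obj] = True
    return hits / n_requests
```

An analytical approximation such as the one the paper proposes would be validated against exactly this kind of simulated (or trace-driven) hit ratio.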
Abstract-In this paper, the problem of Call Admission Control (CAC) is considered for leaky-bucket-constrained sessions with deterministic service guarantees (zero loss and a finite delay bound), served by a Generalized Processor Sharing (GPS) scheduler at a single node in the presence of best-effort traffic. Based on an optimization process, a CAC algorithm capable of determining the (unique) optimal solution is derived. With a slight modification, the derived algorithm is also applicable in a system where the best-effort traffic is absent, and it guarantees that if it does not find a solution to the CAC problem, then no solution exists. The numerical results indicate that the CAC algorithm achieves a significant improvement in bandwidth utilization compared to a (deterministic) effective-bandwidth-based CAC scheme.
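The effective-bandwidth-based baseline that the paper compares against can be sketched in a few lines. For a leaky-bucket session with burst size sigma, rate rho, and deterministic delay bound d, the classical deterministic effective bandwidth is max(rho, sigma / d); a simple CAC admits a set of sessions if the sum of their effective bandwidths fits the link capacity. This is a sketch of the baseline scheme only, not the paper's optimal GPS-based algorithm.

```python
def effective_bandwidth_admit(sessions, capacity):
    """Deterministic effective-bandwidth CAC baseline.

    sessions: iterable of (sigma, rho, d) tuples, where
      sigma = leaky-bucket burst size (bits),
      rho   = leaky-bucket token rate (bits/s),
      d     = deterministic delay bound (s).
    Admit the set if the total effective bandwidth fits the capacity."""
    needed = sum(max(rho, sigma / d) for sigma, rho, d in sessions)
    return needed <= capacity
```

For instance, a session with sigma = 10, rho = 1, d = 5 needs max(1, 10/5) = 2 units of rate; two such sessions exceed a capacity of 3 and would be rejected, even though a GPS-based scheme exploiting per-session weights may still find a feasible allocation.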
Abstract-In this paper, a Delay Tolerant Network (DTN) environment is considered in which the source is in full control of the two-hop spreading mechanism, setting key parameters such as the number of copies allowed to be spread in the network and the delay bound of the messages. The analysis allows for differentiation between the source of the message and the intermediate nodes (in terms of, e.g., transmission power or speed). Analytical expressions for the cumulative distribution function (cdf) of the delivery delay and the induced overhead are derived, taking into account the fact that the source node may continue spreading copies after the message has been delivered. In addition, a fairly accurate approximate expression for the cdf of the delivery delay is derived and validated through simulations.
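The quantity the paper characterizes analytically, the cdf of the two-hop delivery delay, can be estimated by a simple Monte Carlo sketch. The sketch assumes exponential inter-meeting times at a single aggregate rate `lam` for all node pairs (so it does not model the source/relay differentiation the paper analyzes) and caps the number of relay copies at `max_copies`; all names and parameter values are illustrative.

```python
import random

def delivery_delay_sample(rng, lam=1.0, max_copies=5):
    """One Monte Carlo sample of the two-hop delivery delay, assuming
    i.i.d. exponential inter-meeting times with rate lam."""
    # The source may deliver directly to the destination...
    best = rng.expovariate(lam)
    # ...or hand at most max_copies copies to successive relays,
    # each of which then meets the destination independently.
    t = 0.0
    for _ in range(max_copies):
        t += rng.expovariate(lam)                   # source meets next relay
        relay_delivery = t + rng.expovariate(lam)   # relay meets destination
        best = min(best, relay_delivery)
    return best

def empirical_delay_cdf(delay_bound, n=20000, seed=2):
    """Empirical P(delivery delay <= delay_bound)."""
    rng = random.Random(seed)
    return sum(delivery_delay_sample(rng) <= delay_bound
               for _ in range(n)) / n
```

A closed-form or approximate cdf expression, such as the one the paper derives, would be validated against exactly this kind of simulated empirical cdf.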