We investigate speculative prefetching under a model in which prefetching is neither aborted nor preempted by demand fetch but instead gets equal priority in network bandwidth utilization. We argue that the non-abortive assumption is appropriate for wireless networks, where bandwidth is low and latency is high, and the non-preemptive assumption is appropriate for the Internet, where prioritization is not always possible. This paper assumes the existence of an access model to provide some knowledge about future accesses and investigates analytically the performance of a prefetcher that utilizes this knowledge. In mobile computing, because resources are severely constrained, performance prediction is as important as access prediction. For uniform retrieval time, we derive a theoretical limit on the improvement in access time due to prefetching. This leads to the formulation of an optimal algorithm for prefetching one access ahead. For non-uniform retrieval time, two different types of prefetching of multiple documents, namely mainline and branch prefetch, are evaluated against prefetch of a single document. In mainline prefetch, the most probable sequence of future accesses is prefetched. In branch prefetch, a set of different alternatives for future accesses is prefetched. Under some conditions, mainline prefetch may give slight improvement in user-perceived access time over single prefetch with nominal extra retrieval cost, where retrieval cost is defined as the expected network time wasted on non-used prefetch. Branch prefetch performs better than mainline prefetch but incurs more retrieval cost.
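The trade-off between the three strategies can be sketched numerically. The functions and probabilities below are illustrative assumptions, not the paper's analytical model: each prefetched item is assumed to save its retrieval time with the probability it is actually requested, and to waste that time otherwise.

```python
def expected_saving(probabilities, retrieval_time):
    """Expected access-time saving: each prefetched item saves its
    retrieval time with the probability it is actually requested."""
    return sum(p * retrieval_time for p in probabilities)

def wasted_cost(probabilities, retrieval_time):
    """Retrieval cost: expected network time spent on unused prefetches."""
    return sum((1 - p) * retrieval_time for p in probabilities)

# Hypothetical access probabilities for the next two steps.
single   = [0.6]             # prefetch only the most likely next item
mainline = [0.6, 0.6 * 0.7]  # most probable *sequence*: chain the probabilities
branch   = [0.6, 0.3]        # two alternative candidates for the next access

for name, probs in [("single", single), ("mainline", mainline), ("branch", branch)]:
    print(name, expected_saving(probs, 1.0), round(wasted_cost(probs, 1.0), 2))
```

With these made-up probabilities, prefetching more items raises both the expected saving and the wasted cost; which multi-document strategy wins depends on the access probabilities, matching the abstract's "under some conditions" hedging.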
Previous studies in speculative prefetching focus on building and evaluating access models for the purpose of access prediction. This paper investigates a complementary area which has been largely ignored, that of performance modelling. We use improvement in access time as the performance metric, for which we derive a formula in terms of resource parameters (time available and time required for prefetching) and speculative parameters (probabilities for next access). The performance maximisation problem is expressed as a stretch knapsack problem. We develop an algorithm to maximise the improvement in access time by solving the stretch knapsack problem, using theoretically proven apparatus to reduce the search space. Integration between speculative prefetching and caching is also investigated, albeit under the assumption of equal item sizes.
To improve the accuracy of access prediction, a prefetcher for web browsing should recognize the fact that a web page is a compound. By this term we mean that a user request for a single web page may require the retrieval of several multimedia items. Our prediction algorithm builds an access graph that captures the dynamics of web navigation rather than merely attaching probabilities to hypertext structure. When it comes to making prefetch decisions, most previous studies in speculative prefetching resort to simple heuristics, such as prefetching an item whose access probability exceeds a manually tuned threshold. This paper takes a different approach. Specifically, it models the performance of the prefetcher and develops a prefetch policy based on a theoretical analysis of the model. In the analysis, we derive a formula for the expected improvement in access time when prefetch is performed in anticipation of a compound request. We then develop an algorithm that integrates prefetch and cache replacement decisions so as to maximize this improvement. We present experimental results to demonstrate the effectiveness of compound-based prefetching in low bandwidth networks.
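A minimal sketch of the compound idea, under assumptions of my own (the names and the formula are illustrative, not the paper's derivation): the expected improvement from prefetching a whole page is modeled as the page's access probability times the total retrieval time of its constituent items not already in the cache.

```python
def compound_improvement(page_prob, item_times, cache):
    """Simplified expected access-time improvement from prefetching a
    compound web page: access probability times the total retrieval time
    of constituent items that are not already cached. (An assumption for
    illustration; the paper derives a more detailed formula.)

    item_times: dict mapping item name -> retrieval time in seconds.
    cache: set of item names already held in the cache.
    """
    return page_prob * sum(t for item, t in item_times.items()
                           if item not in cache)

# Hypothetical compound page: one HTML file plus two embedded images.
page = {"index.html": 0.4, "logo.png": 1.2, "photo.jpg": 3.0}
print(compound_improvement(0.7, page, cache={"logo.png"}))
```

This also shows why prefetch and cache replacement interact: evicting `logo.png` would raise the page's prefetch benefit while costing cache space, so the two decisions are naturally made together.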
Speculative prefetching has been proposed to improve the response time of network access. Previous studies in speculative prefetching focus on building and evaluating access models for the purpose of access prediction. This paper investigates a complementary area which has been largely ignored, that of performance modeling. We analyze the performance of a prefetcher that has uncertain knowledge about future accesses. Our performance metric is the improvement in access time, for which we derive a formula in terms of resource parameters (time available and time required for prefetching) and speculative parameters (probabilities for next access). We develop a prefetch algorithm to maximize the improvement in access time. The algorithm is based on finding the best solution to a stretch knapsack problem, using theoretically proven apparatus to reduce the search space. An integration between speculative prefetching and caching is also investigated.
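The selection problem described above can be sketched as a knapsack-style search: choose the subset of candidate items whose total prefetch time fits the available time and whose expected improvement is largest. The value function below (probability times retrieval time) and the brute-force search are assumptions for illustration; the paper's "stretch knapsack" formulation and search-space pruning are more refined.

```python
from itertools import combinations

def best_prefetch_set(items, budget):
    """Pick the subset of candidates maximizing expected access-time
    improvement subject to a prefetch-time budget. Brute force over
    subsets, which is fine for the handful of candidates a
    one-step-ahead prefetcher considers.

    items: list of (access_probability, retrieval_time) pairs.
    Returns (indices of chosen items, expected improvement).
    """
    best, best_gain = (), 0.0
    for r in range(1, len(items) + 1):
        for subset in combinations(range(len(items)), r):
            time_needed = sum(items[i][1] for i in subset)
            if time_needed > budget:
                continue  # does not fit in the time available
            gain = sum(items[i][0] * items[i][1] for i in subset)
            if gain > best_gain:
                best, best_gain = subset, gain
    return best, best_gain

# Hypothetical candidates: (probability of access, retrieval time).
candidates = [(0.5, 2.0), (0.3, 1.0), (0.2, 3.0)]
print(best_prefetch_set(candidates, budget=3.0))
```

With this budget, the search prefers the two cheaper, likelier items over the single expensive one, which is the kind of decision a threshold heuristic on probability alone cannot make.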