A commonly employed abstraction for studying the object placement problem in Internet content distribution is the distributed replication group. In this work, the original distributed replication group model of Leff, Wolf, and Yu (IEEE TPDS '93) is extended to the case in which individual nodes act selfishly, i.e., seek to optimize their individual local utilities. Our main contribution is the derivation of equilibrium object placement strategies that: (a) guarantee improved local utilities for all nodes concurrently, as compared to the corresponding local utilities under greedy local object placement; (b) do not suffer from the potential mistreatment problems inherent to centralized strategies that aim at optimizing the social utility; (c) do not require the existence of complete information at all nodes. We develop a baseline, computationally efficient algorithm for obtaining these equilibrium strategies and then extend it to improve its performance with respect to fairness. Both algorithms are realizable in practice through a distributed protocol that requires only limited exchange of information.
Abstract-The effectiveness of service provisioning in large-scale networks is highly dependent on the number and location of service facilities deployed at various hosts. The classical, centralized approach to determining the latter would amount to formulating and solving the uncapacitated k-median (UKM) problem (if the requested number of facilities is fixed), or the uncapacitated facility location (UFL) problem (if the number of facilities is also to be optimized). Clearly, such centralized approaches require knowledge of global topology and demand information, and thus do not scale and are not practical for large networks. The key question posed and answered in this paper is the following: "How can we determine in a distributed and scalable manner the number and location of service facilities?" We propose an innovative approach in which topology and demand information is limited to neighborhoods, or balls of small radius around selected facilities, whereas demand information is captured implicitly for the remaining (remote) clients outside these neighborhoods by mapping them to clients on the edge of the neighborhood; the ball radius regulates the trade-off between scalability and performance. We develop a scalable, distributed approach that answers our key question through an iterative reoptimization of the location and the number of facilities within such balls. We show that even for small values of the radius (1 or 2), our distributed approach achieves performance under various synthetic and real Internet topologies that is comparable to that of optimal, centralized approaches requiring full topology and demand information.
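The iterative, ball-based reoptimization described above can be sketched as follows. This is a simplified illustration, not the paper's exact procedure: the graph encoding, hop-count distances, and the even spreading of remote demand over the ball's boundary nodes are all simplifying assumptions.

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from src to every reachable node."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def reoptimize(adj, demand, facilities, r, rounds=10):
    """Iteratively re-solve a 1-median inside each facility's radius-r ball,
    mapping the demand of remote clients onto the ball's boundary nodes."""
    facilities = set(facilities)
    for _ in range(rounds):
        moved = False
        for f in list(facilities):
            dist_f = bfs_dist(adj, f)
            ball = {v for v, d in dist_f.items() if d <= r}
            load = {v: demand.get(v, 0.0) for v in ball}
            boundary = [v for v in ball if dist_f[v] == r] or [f]
            remote = sum(w for v, w in demand.items()
                         if dist_f.get(v, r + 1) > r)
            for b in boundary:  # spread remote demand evenly (a simplification)
                load[b] += remote / len(boundary)
            # Local 1-median: the ball node minimizing weighted hop distance.
            best = min(ball, key=lambda c: sum(
                w * bfs_dist(adj, c).get(v, 0) for v, w in load.items()))
            if best != f:
                facilities.discard(f)
                facilities.add(best)
                moved = True
        if not moved:
            break
    return facilities
```

On a 5-node path with all demand at one end and a facility at the other, repeated r=2 ball reoptimizations walk the facility toward the demand, which is the qualitative behavior the abstract describes.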
Abstract-Large scale hierarchical caches for web content have been deployed widely in an attempt to reduce delivery delays and bandwidth consumption, and also to improve the scalability of content dissemination through the World Wide Web. Irrespective of the specific replacement algorithm employed in each cache, a de facto characteristic of contemporary hierarchical caches is that a hit for a document at a cache at some level of the hierarchy leads to the caching of the document in all intermediate caches (levels) on the path towards the leaf cache that received the initial request. This paper presents various algorithms that revise this standard behavior and attempt to be more selective in choosing the caches that get to store a local copy of the requested document. As these algorithms operate independently of the actual replacement algorithm running in each individual cache, they are referred to as meta algorithms. Three new meta algorithms are proposed and compared against the de facto one and a recently proposed one by H. Che, Y. Tung, and Z. Wang [1] by means of synthetic and trace-driven simulations. The best of the new meta algorithms appears to be able to lead to improved performance under most simulated scenarios, especially under a low availability of storage. The latter observation makes the presented meta algorithms particularly favorable for the handling of large data objects such as stored music files or short video clips. Additionally, a simple load balancing algorithm that is based on the concept of meta algorithms is proposed and evaluated. The algorithm is shown to be able to provide an effective balancing of load, thus possibly addressing the recently discovered "filtering-effect" in hierarchical web caches (C. Williamson [2]).
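The separation between a meta algorithm and per-cache replacement can be illustrated with a small sketch. The cache internals (a plain LRU here) and the selective rule shown (copy only one level below the hit, in the spirit of a leave-copy-down policy) are illustrative assumptions; the paper's own meta algorithms differ in their exact placement rules.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU replacement; the meta algorithm is layered on top of it."""
    def __init__(self, capacity):
        self.capacity, self.store = capacity, OrderedDict()
    def get(self, doc):
        if doc in self.store:
            self.store.move_to_end(doc)   # refresh recency on a hit
            return True
        return False
    def put(self, doc):
        self.store[doc] = True
        self.store.move_to_end(doc)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

def fetch(path, doc, meta="all"):
    """path: caches from the leaf (index 0) up towards the root.
    Returns the level where doc was found (len(path) means the origin)."""
    hit_level = next((i for i, c in enumerate(path) if c.get(doc)), len(path))
    if meta == "all":        # de facto behavior: copy at every level below the hit
        targets = range(hit_level)
    else:                    # selective variant: copy only one level below the hit
        targets = [hit_level - 1] if hit_level > 0 else []
    for i in targets:
        path[i].put(doc)
    return hit_level
```

Because `fetch` only calls `get`/`put`, any replacement policy with that interface can be substituted per cache, which is exactly the independence property that makes these meta algorithms.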
The addition of storage capacity in network nodes for the caching or replication of popular data objects results in reduced end-user delay, reduced network traffic, and improved scalability. The problem of allocating an available storage budget to the nodes of a hierarchical content distribution system is formulated; optimal algorithms, as well as fast/efficient heuristics, are developed for its solution. An innovative aspect of the presented approach is that it combines all relevant subproblems, concerning node locations, node sizes, and object placement, and solves them jointly in a single optimization step. The developed algorithms may be utilized in content distribution networks that employ either replication or caching/replacement. In addition to reducing the average fetch distance for the requested content, they also cater to load balancing and workload constraints on a given node. Strictly hierarchical, as well as hierarchical with peering, request routing models are considered.
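A simple greedy heuristic conveys the flavor of the joint placement/sizing decision: each step stores one more object copy, namely the (node, object) pair with the largest remaining reduction in average fetch distance, until the storage budget is exhausted. The tree encoding and gain model below are illustrative assumptions for a strictly hierarchical system, not the paper's optimal algorithms.

```python
def greedy_allocate(budget, parent, leaf_rates, objects):
    """parent[v] -> parent node (the root's parent is None);
    leaf_rates[(leaf, obj)] -> request rate. A leaf's request ascends the
    hierarchy until it meets a copy; otherwise it is served by the origin,
    assumed to sit one hop above the root."""
    copies = set()                      # (node, obj) pairs stored so far

    def fetch_cost(leaf, obj):
        v, hops = leaf, 0
        while v is not None:
            if (v, obj) in copies:
                return hops
            v, hops = parent[v], hops + 1
        return hops                     # reached the origin above the root

    def total_cost():
        return sum(r * fetch_cost(l, o) for (l, o), r in leaf_rates.items())

    for _ in range(budget):
        base = total_cost()
        best, best_gain = None, 0.0
        for v in parent:                # try every remaining placement
            for o in objects:
                if (v, o) in copies:
                    continue
                copies.add((v, o))
                gain = base - total_cost()
                copies.remove((v, o))
                if gain > best_gain:
                    best, best_gain = (v, o), gain
        if best is None:                # no placement reduces cost further
            break
        copies.add(best)
    return copies
```

Note how a single loop jointly decides node sizes (how many copies each node ends up with) and object placement, mirroring the single-optimization-step idea, albeit greedily rather than optimally.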
Continuous media are characterized by well defined temporal relationships between subsequent media units (MUs). Information is only conveyed when these temporal relationships are preserved at presentation time (if altered during transport, they need to be reconstructed prior to presentation). The reconstruction of temporal relationships between MUs of the same stream is referred to as intrastream synchronization. For video presentations, the temporal relationship refers to the spacing between subsequent frames, which is dictated by the frame production rate, typically 25 or 30 frames/s. For packet audio, the basic MU is a voice sample, and the spacing between voice samples is determined by the sampling process. Temporal relationships also exist between MUs that belong to different streams, when these streams are to be consumed concurrently, as in an orchestrated audiovisual presentation (the lip synchronization problem). The problem of synchronization between different but related streams is called interstream synchronization and is outside the scope of this article. For intermedia synchronization issues, the reader is referred to [1][2][3][4]. A packet media receiver consists of a playout buffer for the temporary storage of incoming MUs and a playout scheduler for the presentation of MUs. The role of the scheduler is to provide a presentation schedule that resembles as closely as possible the temporal relationships created by the encoding process. In doing so, the scheduler employs MU buffering, the extent of which is bounded by the maximum end-to-end delay tolerance of the application. Bidirectional applications such as desktop videoconferencing place very strict latency requirements, typically a few hundred milliseconds.
On the other hand, unidirectional applications such as video on demand (VOD) allow for much larger latencies that range from around 1 s for responsive Web-based distribution of short video clips to several minutes in near-VOD systems. All the proposed schemes provide some compromise between intrastream synchronization quality and the increase of end-to-end delay due to the buffering of MUs. At the two extremes of this continuum of choices we have the bufferless scheduler, which provides minimal stream delay by presenting frames as soon as they arrive, and the assured synchronization method, which completely eliminates the effects of jitter at the expense of a long stream delay. In what follows we attempt to provide a structured presentation of proposed playout schedulers by examining the way they tackle the fundamental trade-off between synchronization quality and imposed delay. Alongside the operational comparison of different schemes, an effort is made to indicate their suitability for different real-world applications. The remainder of the article is organized as follows. Some background material and an outline are presented. We discuss the appropriateness of the various schemes for different media types. We present the family of time-oriented playout schedulers...
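The buffering trade-off described above can be made concrete with a minimal fixed-delay playout scheduler: each MU generated at time g is scheduled for presentation at g + D, and units arriving after that deadline are discarded. The fixed per-stream delay D and the data layout are illustrative assumptions; adaptive schedulers instead adjust D per talkspurt or per stream.

```python
def schedule(arrivals, playout_delay):
    """arrivals: list of (gen_time, arrival_time) per media unit, in order.
    Returns (play_times, late): play_times[i] is None for a discarded MU,
    and `late` counts MUs that missed their playout deadline."""
    play_times, late = [], 0
    for gen, arr in arrivals:
        deadline = gen + playout_delay   # fixed offset from generation time
        if arr <= deadline:
            play_times.append(deadline)  # buffered until its deadline: jitter absorbed
        else:
            play_times.append(None)      # missed deadline: intrastream sync loss
            late += 1
    return play_times, late
```

Increasing `playout_delay` reduces late losses (better synchronization quality) at the cost of a larger end-to-end delay, which is exactly the continuum between the bufferless scheduler (D equal to the minimum network delay) and assured synchronization (D at least the maximum network delay).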