It is well known that after placing n balls independently and uniformly at random into n bins, the fullest bin holds Θ(log n/log log n) balls with high probability. More recently, Azar et al. analyzed the following process: choose d bins independently and uniformly at random for each ball, and place the balls, one by one, into the least full of the d chosen bins. They show that after all n balls have been placed, the fullest bin contains only log log n/log d + Θ(1) balls with high probability. We explore extensions of this result to parallel and distributed settings. Our results focus on the tradeoff between the amount of communication and the final load. Given r rounds of communication, we provide lower bounds on the maximum load of Ω((log n/log log n)^{1/r}) for a wide class of strategies. Our results extend to the case where the number of rounds is allowed to grow with n. We then demonstrate parallelizations of the sequential strategy presented in Azar et al. that achieve loads within a constant factor of the lower bound for two communication rounds and almost match the sequential strategy given log log n/log d + O(d) rounds of communication. We also examine a parallel threshold strategy based on rethrowing balls placed in heavily loaded bins. This strategy achieves loads within a constant factor of the lower bound for a constant number of rounds, and it achieves a final load of at most O(log log n) given Ω(log log n) rounds of communication. The algorithm also works well in asynchronous environments. © 1998 John Wiley & Sons, Inc. Random Structures and Algorithms, 13, 159–188 (1998)
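For intuition, here is a minimal Python simulation of the sequential process the abstract describes: classical single-choice placement (d = 1) versus the d-choice strategy of Azar et al. This is a sketch of the sequential baseline only, not of the parallel strategies analyzed in the paper; the function name and parameters are ours.

```python
import random

def max_load(n: int, d: int) -> int:
    """Place n balls into n bins; each ball goes to the least full
    of d bins chosen independently and uniformly at random."""
    bins = [0] * n
    for _ in range(n):
        choices = [random.randrange(n) for _ in range(d)]
        bins[min(choices, key=lambda b: bins[b])] += 1
    return max(bins)

# d = 1 gives a maximum load of about log n / log log n,
# while d >= 2 drops it to roughly log log n / log d + O(1).
print(max_load(100_000, 1), max_load(100_000, 2))
```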
In this paper, we consider the problem of compressing graphs of the link structure of the World Wide Web. We provide efficient algorithms for such compression that are motivated by recently proposed random graph models for describing the Web. The algorithms are based on reducing the compression problem to the problem of finding a minimum spanning tree in a directed graph related to the original link graph. The performance of the algorithms on graphs generated by the random graph models suggests that by taking advantage of the link structure of the Web, one may achieve significantly better compression than natural Huffman-based schemes. We also provide hardness results demonstrating limitations on natural extensions of our approach.
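The shape of such a reduction can be sketched as follows, under assumptions of ours rather than the paper's exact cost model: estimate the cost of encoding each page's out-links as a diff against a reference page (here, crudely, the size of the symmetric difference of the two link sets), add a virtual root representing encoding a list from scratch, and take a minimum spanning arborescence of the resulting directed graph. The helper name reference_tree and the use of networkx are illustrative.

```python
import networkx as nx

def reference_tree(adj):
    """adj maps page -> set of out-links. Build a digraph whose edge
    (u, v) weight approximates the bits needed to encode v's links
    given u's; a minimum spanning arborescence then picks, for each
    page, the cheapest reference page to encode it against."""
    g = nx.DiGraph()
    for v, links in adj.items():
        g.add_edge("root", v, weight=len(links))  # encode from scratch
        for u, ulinks in adj.items():
            if u != v:
                g.add_edge(u, v, weight=len(links ^ ulinks))  # encode as a diff
    return nx.minimum_spanning_arborescence(g)

adj = {"p1": {"p2", "p3"}, "p2": {"p2", "p3"}, "p3": {"p1", "p2", "p3"}}
print(sorted(reference_tree(adj).edges()))
```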
In this paper, we address the problem of efficiently streaming a set of heterogeneous videos from a remote server through a proxy to multiple asynchronous clients so that they can experience playback with low startup delays. We determine the optimal proxy prefix cache allocation to the videos that minimizes the aggregate network bandwidth cost. We integrate proxy caching with traditional server-based reactive transmission schemes such as batching, patching and stream merging to develop a set of proxy-assisted delivery schemes. We quantitatively explore the impact of the choice of transmission scheme, cache allocation policy, proxy cache size, and availability of unicast versus multicast capability, on the resulting transmission cost. Our evaluations show that even a relatively small prefix cache (10%-20% of the video repository) is sufficient to realize substantial savings in transmission cost. We find that carefully designed proxy-assisted reactive transmission schemes can produce significant cost savings even in a predominantly unicast environment such as the Internet.
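As a back-of-the-envelope illustration of why a small prefix cache helps (a simplified model of ours, not the paper's cost formulation): suppose requests arrive as a Poisson process, and the proxy's cached prefix lets every client arriving within a window equal to the prefix length share one multicast of the remaining suffix. The expected server bandwidth then falls rapidly with prefix size.

```python
def server_cost_batching(lam: float, L: float, v: float) -> float:
    """Illustrative cost model: requests arrive Poisson(lam); the proxy
    holds a prefix of length v and plays it out immediately, so all
    requests arriving within a window of length v share one multicast
    of the suffix (L - v). Expected server bandwidth (in concurrent
    streams) = suffix length per batch / mean batch duration."""
    batch_duration = v + 1.0 / lam  # window plus mean wait for first arrival
    return (L - v) / batch_duration

L = 120.0  # video length in minutes
for frac in (0.0, 0.1, 0.2):
    print(f"prefix {frac:4.0%}: {server_cost_batching(1.0, L, frac * L):6.2f} streams")
```

In this toy model, with one request per minute for a two-hour video, a 10% prefix already cuts server bandwidth from 120 concurrent streams to under 10.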
We address the problem of energy-efficient reliable wireless communication in the presence of unreliable or lossy wireless link layers in multi-hop wireless networks. Prior work [1] has provided an optimal energy-efficient solution to this problem for the case where link layers implement perfect reliability. However, the more common scenario, in which the link layer is not perfectly reliable, was left as an open problem. In this paper we first present two centralized algorithms, BAMER and GAMER, that optimally solve the minimum-energy reliable communication problem in the presence of unreliable links. Subsequently we present a distributed algorithm, DAMER, that approximates the performance of the centralized algorithms and leads to significant performance improvement over existing single-path or multi-path techniques.
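To make the objective concrete, here is a sketch of one standard formulation of minimum-energy reliable routing under hop-by-hop retransmission: a link with transmission energy E and packet error rate p costs E/(1 − p) in expectation, and a minimum-cost path under this metric can be found with Dijkstra's algorithm. This illustrates the lossy-link cost model only; it is not the BAMER, GAMER, or DAMER algorithm from the paper (which must also handle, e.g., end-to-end retransmission, where path reliability is a product over links).

```python
import heapq

def min_energy_path(graph, src, dst):
    """Dijkstra over expected transmission energy. graph maps node ->
    list of (neighbor, energy, loss_prob); with hop-by-hop
    retransmission a link's expected cost is energy / (1 - p)."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, energy, p in graph[u]:
            nd = d + energy / (1.0 - p)  # expected energy incl. retransmissions
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], dist[dst]

# Short lossy hops (p = 0.6) versus longer reliable hops (p = 0.1):
g = {"a": [("b", 1.0, 0.6), ("c", 2.0, 0.1)],
     "b": [("d", 1.0, 0.6)],
     "c": [("d", 2.0, 0.1)],
     "d": []}
print(min_energy_path(g, "a", "d"))  # picks the reliable route a-c-d
```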