Recent interest in supporting packet-audio applications over wide-area networks has been fueled by the availability of low-cost, toll-quality workstation audio and by the demonstration that limited amounts of interactive audio can be supported by today's Internet. In such applications, received audio packets are buffered and their playout delayed at the destination host in order to compensate for variable network delays. The authors investigate the performance of four different algorithms for adaptively adjusting the playout delay of audio packets in an interactive packet-audio terminal application in the face of such varying network delays. They evaluate the playout algorithms using experimentally obtained delay measurements of audio traffic between several different Internet sites. Their results indicate that an adaptive algorithm which explicitly adjusts to the sharp, spike-like increases in packet delay observed in the traces can achieve a lower rate of lost packets for both a given average playout delay and a given maximum buffer size.
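As an illustrative sketch of the kind of adaptive playout adjustment the abstract describes, the estimator below smooths observed one-way delay with an exponentially weighted moving average and adds a variation-based safety margin. The class name, the smoothing constant, and the 4x margin are our assumptions for illustration, not details taken from the paper.

```python
class PlayoutEstimator:
    """EWMA-based playout delay estimator: a sketch of one common
    adaptive scheme. The alpha value and the 4x variation margin
    are illustrative choices, not the paper's parameters."""

    def __init__(self, alpha=0.998002):
        self.alpha = alpha
        self.delay = None      # smoothed one-way delay estimate
        self.variation = 0.0   # smoothed mean deviation (jitter proxy)

    def observe(self, network_delay):
        """Fold one measured packet delay into the estimates and
        return the resulting playout delay."""
        if self.delay is None:
            self.delay = network_delay
        else:
            self.delay = (self.alpha * self.delay
                          + (1 - self.alpha) * network_delay)
            dev = abs(self.delay - network_delay)
            self.variation = (self.alpha * self.variation
                              + (1 - self.alpha) * dev)
        return self.playout_delay()

    def playout_delay(self):
        # Buffer long enough to cover the estimated delay plus a
        # margin to absorb jitter; larger margins trade added
        # playout delay for fewer late (lost) packets.
        return self.delay + 4 * self.variation
```

With steady delay the margin stays at zero; jitter inflates the margin, which mirrors the loss-versus-delay trade-off the abstract evaluates.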
Neighbor discovery is one of the first steps in the initialization of a wireless ad hoc network. In this paper, we design and analyze practical algorithms for neighbor discovery in wireless networks. We first consider an ALOHA-like neighbor discovery algorithm in a synchronous system, proposed in an earlier work. When nodes do not have a collision detection mechanism, we show that this algorithm reduces to the classical Coupon Collector's Problem. Consequently, we show that each node discovers all its n neighbors in an expected time equal to ne(ln n + c), for some constant c. When nodes have a collision detection mechanism, we propose an algorithm based on receiver status feedback which yields a ln n improvement over the ALOHA-like algorithm. Our algorithms do not require nodes to have any estimate of the number of neighbors. In particular, we show that not knowing n results in no more than a factor-of-two slowdown in algorithm performance. In the absence of node synchronization, we develop asynchronous neighbor discovery algorithms that are only a factor of two slower than their synchronous counterparts. We show that our algorithms can achieve neighbor discovery despite allowing nodes to begin execution at different time instants. Furthermore, our algorithms allow each node to detect when to terminate the neighbor discovery phase.
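The reduction to the Coupon Collector's Problem can be illustrated with a small simulation: in each slot every node transmits with some probability, and without collision detection a slot is useful only when exactly one node transmits, so discovery completes once every node has transmitted alone at least once. The function and parameter names below are ours, and the transmission probability 1/n is an illustrative choice; this is a sketch of the setting, not the paper's code.

```python
import random


def aloha_discovery_time(n, p=None, rng=random):
    """Simulate an ALOHA-like neighbor discovery round among n nodes.

    In each slot every node transmits independently with probability p
    (1/n by default). A slot succeeds only when exactly one node
    transmits; that node is then heard by all others. Returns the
    number of slots until every node has been heard at least once.
    Illustrative sketch only."""
    if p is None:
        p = 1.0 / n
    discovered = set()
    slots = 0
    while len(discovered) < n:
        slots += 1
        transmitters = [i for i in range(n) if rng.random() < p]
        if len(transmitters) == 1:  # no collision: a successful slot
            discovered.add(transmitters[0])
    return slots
```

Averaged over many runs, the slot count tracks the ne(ln n + c) scaling quoted in the abstract: each successful slot is a "coupon draw," and roughly an e-fraction of slots are wasted on silence or collisions.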
Sender-initiated reliable multicast protocols, based on the use of positive acknowledgments (ACKs), lead to an ACK implosion problem at the sender as the number of receivers increases. Briefly, the ACK implosion problem refers to the significant overhead incurred by the sending host due to the processing of ACKs from each receiver. A potential solution to this problem is to shift the burden of providing reliable data transfer to the receivers, thus resulting in a receiver-initiated multicast error control protocol based on the use of negative acknowledgments (NAKs). In this paper we determine the maximum throughputs of the sending and receiving hosts for generic sender-initiated and receiver-initiated protocols. We show that the receiver-initiated error control protocols provide substantially higher throughputs than their sender-initiated counterparts. We further demonstrate that the introduction of random delays prior to generating NAKs, coupled with the multicasting of NAKs to all receivers, has the potential for an additional substantial increase in the throughput of receiver-initiated error control protocols over sender-initiated protocols.
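The NAK-suppression idea in the last sentence (random delays before generating NAKs, with NAKs multicast to all receivers) can be sketched as follows. The function name, the uniform backoff window, and the single propagation-delay parameter are our simplifying assumptions, not the paper's analytical model.

```python
import random


def naks_sent(num_receivers, prop_delay, rng=random):
    """Sketch of NAK suppression for one lost packet.

    Each receiver that detected the loss draws a random backoff in
    [0, 1). The earliest timer fires and that receiver multicasts its
    NAK; any receiver whose timer fires before that NAK arrives
    (i.e., within prop_delay of the first) also sends, while the rest
    hear the multicast NAK and suppress their own. Returns the number
    of NAKs actually transmitted. Illustrative model only."""
    backoffs = sorted(rng.uniform(0.0, 1.0) for _ in range(num_receivers))
    first = backoffs[0]
    return sum(1 for b in backoffs if b <= first + prop_delay)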
During the past ten years, the field of multiple-access communication has developed into a major area of both practical and theoretical interest within the field of computer communications. The multiple-access problem arises from the necessity of sharing a single communication channel among a community of distributed users. The distributed algorithm used by the stations to share the channel is known as the multiple-access protocol. In this paper we examine the multiple-access problem and various approaches to its resolution. In this survey we first define the multiple-access problem and then present the underlying issues and difficulties in achieving multiple-access communication. A taxonomy for multiple-access protocols is then developed in order to characterize common approaches and to provide a framework within which these protocols can be compared and contrasted. Different proposed protocols are then described and discussed, and aspects of their performance are examined. The use of multiple-access protocols for "real-time" or "time-constrained" communication applications, such as voice transmission, is examined next. Issues in time-constrained communication are identified, and recent work in the design of time-constrained multiple-access protocols is surveyed.
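As a concrete instance of the random-access family of multiple-access protocols such a taxonomy covers, here is a small Monte-Carlo sketch of slotted ALOHA throughput: stations contend for a shared channel, and a slot carries a packet only when exactly one station transmits. All names and parameters are illustrative, not drawn from the survey.

```python
import random


def slotted_aloha_throughput(num_stations, p, num_slots=20_000, rng=random):
    """Estimate slotted-ALOHA throughput (successful slots per slot).

    Each station transmits independently in each slot with probability
    p; two or more simultaneous transmissions collide and carry
    nothing. Illustrative sketch of one classical random-access
    multiple-access protocol."""
    successes = 0
    for _ in range(num_slots):
        transmitters = sum(
            1 for _ in range(num_stations) if rng.random() < p
        )
        if transmitters == 1:
            successes += 1
    return successes / num_slots
```

Throughput peaks near p = 1/N at roughly (1 - 1/N)^(N-1), approaching 1/e (about 0.37) for large N, which illustrates the channel-sharing inefficiency that more elaborate multiple-access protocols try to reduce.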
Computer — Published by the IEEE Computer Society

Two closely linked projects aim to dramatically improve storm forecasting speed and accuracy. CASA is creating a distributed, collaborative, adaptive sensor network of low-power, high-resolution radars that respond to user needs. LEAD offers dynamic workflow orchestration and data management in a Web services framework designed to support on-demand, real-time, dynamically adaptive systems.

CASA and LEAD establish an interactive closed loop between the forecast analysis and the instruments: the data drives the instruments, which refocus in a repeated cycle to make more accurate predictions. The "Hypothetical CASA-LEAD Scenario" sidebar provides an example of the unprecedented capabilities these changes afford. Mesoscale meteorology is the study of smaller-scale weather phenomena such as severe storms, tornadoes, and hurricanes. System-level science in this context involves the responsiveness of the forecast models to the weather at hand as well as to conditions on the network at large and the large-scale computational resources on which forecasts rely. This responsiveness can be broken down into four narrowly defined goals:

• Dynamic workflow adaptivity. Forecasts execute in the context of a workflow, or task graph. Workflows should be able to dynamically reconfigure in response to new events.
• Dynamic resource allocation. The system should be able to dynamically allocate resources, including radars and remote observing technologies, to opti…