State-of-the-art, real-time, rate-adaptive multimedia applications adjust their transmission rate to match the available network capacity. Unfortunately, this source-based rate adaptation performs poorly in a heterogeneous multicast environment because there is no single target rate: the conflicting bandwidth requirements of all receivers cannot be simultaneously satisfied with one transmission rate. If the burden of rate adaptation is moved from the source to the receivers, heterogeneity is accommodated. One approach to receiver-driven adaptation is to combine a layered source-coding algorithm with a layered transmission system. By selectively forwarding subsets of layers at constrained network links, each user receives the best quality signal that the network can deliver. We and others have proposed that selective forwarding be carried out using multiple IP-Multicast groups, where each receiver specifies its level of subscription by joining a subset of the groups. In this paper, we extend the multiple-group framework with a rate-adaptation protocol called Receiver-driven Layered Multicast, or RLM. Under RLM, multicast receivers adapt both to the static heterogeneity of link bandwidths and to dynamic variations in network capacity (i.e., congestion). We describe the RLM protocol and evaluate its performance with a preliminary simulation study that characterizes user-perceived quality by assessing loss rates over multiple time scales. For the configurations we simulated, RLM results in good throughput, with transient short-term loss rates on the order of a few percent and long-term loss rates on the order of one percent. Finally, we discuss our implementation of a software-based Internet video codec and its integration with RLM.
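To make the adaptation loop concrete, the following is a minimal Python sketch of the receiver-side behavior the abstract describes: each layer maps to one multicast group, a "join experiment" tentatively adds a layer, and observed loss causes the receiver to drop its top layer and back off that layer's join timer. The class and parameter names here are illustrative assumptions, not the paper's implementation.

    # Hypothetical receiver-side loop: one multicast group per layer.
    class RlmReceiver:
        def __init__(self, num_layers, join_timer=1.0, backoff=2.0):
            self.num_layers = num_layers          # layers offered by the source
            self.level = 1                        # groups currently joined
            # per-layer join timers; a failed experiment backs a timer off
            self.timers = [join_timer] * (num_layers + 1)
            self.backoff = backoff

        def on_join_timeout(self):
            """Join experiment: tentatively subscribe to one more layer."""
            if self.level < self.num_layers:
                self.level += 1                   # join the next group

        def on_loss(self):
            """Congestion signal: drop the top layer, back off its timer."""
            if self.level > 1:
                self.timers[self.level] *= self.backoff
                self.level -= 1                   # leave the highest group

    # Toy driver: loss appears whenever subscription exceeds the bottleneck.
    rx, bottleneck = RlmReceiver(num_layers=4), 2
    for _ in range(10):
        rx.on_join_timeout()
        if rx.level > bottleneck:
            rx.on_loss()
    print("converged subscription level:", rx.level)   # -> 2

The full protocol additionally coordinates join experiments across receivers (shared learning), so one receiver's failed experiment informs the others; the sketch omits that and models only a single receiver against a fixed bottleneck.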
Several recent proposals for an "active networks" architecture advocate the placement of user-defined computation within the network as a key mechanism to enable a wide range of new applications and protocols, including reliable multicast transports, mechanisms to foil denial-of-service attacks, intra-network real-time signal transcoding, and so forth. This laudable goal, however, creates a number of very difficult research problems, and although a number of pioneering research efforts in active networks have solved some of the preliminary small-scale problems, many wide-open problems remain. In this paper, we propose an alternative to active networks that addresses a restricted and more tractable subset of the active-networks design space. Our approach, which we (and others) call "active services", advocates the placement of user-defined computation within the network as with active networks, but unlike active networks it preserves all of the routing and forwarding semantics of the current Internet architecture by restricting the computation environment to the application layer. Because active services do not require changes to the Internet architecture, they can be deployed incrementally in today's Internet. We believe that many of the applications and protocols targeted by the active-networks initiative can be solved with active services and, toward this end, we propose herein a specific architecture for an active service and develop one such service in detail, the Media Gateway (MeGa) service, that exploits this architecture. In defining our active service, we encountered six key problems (service location, service control, service management, service attachment, service composition, and the definition of the service environment) and have crafted solutions for these problems in the context of the MeGa service. To verify our design, we implemented and fielded MeGa on the UC Berkeley campus, where it has been used regularly for several months by real users who connect via ISDN to an "on-line classroom". Our initial experience indicates that our active-services prototype provides a very flexible and programmable platform for intra-network computation that strikes a good balance between the flexibility of the active-networks architecture and the practical constraints of incremental deployment in the current Internet.
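The architectural point, that all computation stays at the application layer, can be illustrated with a tiny relay: a gateway process attaches a client to a media source and forwards (and could transcode) the stream using ordinary sockets, leaving IP routing and forwarding untouched. This is only a sketch of that placement; the host name, ports, and pass-through transcode hook below are hypothetical and are not the MeGa design.

    import socket
    import threading

    UPSTREAM = ("media-source.example.org", 5004)   # hypothetical source
    LISTEN   = ("", 6004)                           # where clients attach

    def transcode(chunk: bytes) -> bytes:
        # Placeholder: a real gateway would re-encode the stream here,
        # e.g., down to a bit rate an ISDN client can sustain.
        return chunk

    def serve(client: socket.socket) -> None:
        # Relay bytes from the source to one attached client.
        with socket.create_connection(UPSTREAM) as upstream, client:
            while chunk := upstream.recv(4096):
                client.sendall(transcode(chunk))

    def main() -> None:
        with socket.socket() as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(LISTEN)
            srv.listen()
            while True:
                conn, _ = srv.accept()   # "service attachment" at the app layer
                threading.Thread(target=serve, args=(conn,), daemon=True).start()

    if __name__ == "__main__":
        main()

Nothing below the application layer changes, which is why such a service can be deployed incrementally; everything the abstract enumerates (location, control, management, attachment, composition, environment) concerns how such gateway processes are found and run, not how packets are forwarded.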
"Soft state" is an often cited yet vague concept in network protocol design in which two or more network entities intercommunicate in a loosely coupled, often anonymous fashion. Researchers often define this concept operationally (if at all) rather than analytically: a source of soft state transmits periodic "refresh messages" over a (lossy) communication channel to one or more receivers that maintain a copy of that state, which in turn "expires" if the periodic updates cease. Though a number of crucial Internet protocol building blocks are rooted in soft state-based designs --- e.g., RSVP refresh messages, PIM membership updates, various routing protocol updates, RTCP control messages, directory services like SAP, and so forth --- controversy is building as to whether the performance overhead of soft state refresh messages justify their qualitative benefit of enhanced system "robustness". We believe that this controversy has risen not from fundamental performance tradeoffs but rather from our lack of a comprehensive understanding of soft state. To better understand these tradeoffs, we propose herein a formal model for soft state communication based on a probabilistic delivery model with relaxed reliability. Using this model, we conduct queueing analysis and simulation to characterize the data consistency and performance tradeoffs under a range of workloads and network loss rates. We then extend our model with feedback and show, through simulation, that adding feedback dramatically improves data consistency (by up to 55%) without increasing network resource consumption. Our model not only provides a foundation for understanding soft state, but also induces a new fundamental transport protocol based on probabilistic delivery. Toward this end, we sketch our design of the "Soft State Transport Protocol" (SSTP), which enjoys the robustness of soft state while retaining the performance benefit of hard state protocols like TCP through its judicious use of feedback.
There is widespread agreement on the need for architectural change in the Internet, but very few believe that current ISPs will ever effect such changes. In this paper we ask what makes an architecture evolvable, by which we mean capable of gradual change led by the incumbent providers. This involves both technical and economic issues, since ISPs have to be able, and incented, to offer new architectures. Our study suggests that, with very minor modifications, the current Internet architecture could be evolvable.