The proportional differentiation model provides the network operator with the 'tuning knobs' for adjusting the per-hop quality-of-service (QoS) ratios between classes, independent of the class loads. This paper applies the proportional model to the differentiation of queueing delays and investigates appropriate packet scheduling mechanisms. Starting from the proportional delay differentiation (PDD) model, we derive the average queueing delay in each class, show the dynamics of the class delays under the PDD constraints, and state the conditions in which the PDD model is feasible. The feasibility of the model can be determined from the average delays that result with the strict priorities scheduler. We then focus on scheduling mechanisms that can implement the PDD model, when it is feasible to do so. The proportional average delay (PAD) scheduler meets the PDD constraints, when they are feasible, but it exhibits pathological behavior over short timescales. The waiting time priority (WTP) scheduler, on the other hand, approximates the PDD model closely, even over the short timescales of a few packet departures, but only under heavy load. PAD and WTP serve as motivation for the third scheduler, called hybrid proportional delay (HPD). HPD approximates the PDD model closely, when the model is feasible, independent of the class load distribution. Also, HPD provides predictable delay differentiation even in short timescales.

Index Terms-Dynamic priorities, quality of service, resource management algorithms.
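As a rough illustration of the waiting-time-priority rule named above, the following Python sketch serves, at each departure, the class whose head-of-line packet has the largest product of waiting time and class weight. The class identifiers, data structures, and the weight choice (e.g., s_i = 1/delta_i) are our assumptions for illustration; the paper defines the schedulers over the PDD parameters more precisely.

class WTPScheduler:
    """Illustrative waiting-time priority (WTP) scheduler.

    The head packet of class i at time t gets priority w_i(t) * s_i,
    where w_i(t) is its waiting time and s_i a fixed class weight.
    Serving the class with the largest head-of-line priority tends to
    equalize the ratios of class waiting times, which is how WTP
    approximates the PDD constraints under heavy load.
    """

    def __init__(self, weights):
        # weights: dict mapping class id -> weight s_i (assumed
        # parameterization, e.g., s_i = 1 / delta_i)
        self.weights = weights
        self.queues = {c: [] for c in weights}  # per-class FIFO of (arrival, pkt)

    def enqueue(self, cls, pkt, arrival_time):
        self.queues[cls].append((arrival_time, pkt))

    def dequeue(self, now):
        """Return the packet with the highest waiting-time priority, or None."""
        best_cls, best_prio = None, float("-inf")
        for cls, q in self.queues.items():
            if q:
                prio = (now - q[0][0]) * self.weights[cls]  # w_i(t) * s_i
                if prio > best_prio:
                    best_cls, best_prio = cls, prio
        return None if best_cls is None else self.queues[best_cls].pop(0)[1]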
Abstract-The packet pair technique estimates the capacity of a path (bottleneck bandwidth) from the dispersion (spacing) experienced by two back-to-back packets [1][2][3]. We demonstrate that the dispersion of packet pairs in loaded paths follows a multimodal distribution, and discuss the queueing effects that cause the multiple modes. We show that the path capacity is often not the global mode, and so it cannot be estimated using standard statistical procedures. The effect of the size of the probing packets is also investigated, showing that the conventional wisdom of using maximum-sized packet pairs is not optimal. We then study the dispersion of long packet trains. Increasing the length of the packet train reduces the measurement variance, but the estimates converge to a value, referred to as the Asymptotic Dispersion Rate (ADR), that is lower than the capacity. We derive the effect of the cross traffic on the dispersion of long packet trains, showing that the ADR is not the available bandwidth of the path, as was assumed in previous work. Putting all the pieces together, we present a capacity estimation methodology that has been implemented in a tool called pathrate.
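The two basic calculations behind this methodology are easy to state in code. The sketch below (our illustration, not pathrate's actual implementation) turns each packet-pair dispersion delta into a bandwidth sample L/delta, bins the samples to expose the local modes of the multimodal distribution, and computes the ADR of a packet train; the probe size and bin width are assumed parameters.

from collections import Counter

PROBE_SIZE_BITS = 1500 * 8  # assumed probe packet size (1500 bytes)

def pair_estimate(dispersion_s, size_bits=PROBE_SIZE_BITS):
    """One packet-pair bandwidth sample: b = L / delta (bits/sec)."""
    return size_bits / dispersion_s

def local_modes(dispersions, bin_mbps=1.0):
    """Histogram the per-pair samples and return the local modes (Mbps).

    The capacity is one of these modes but, on a loaded path, often not
    the global one, so all local modes are kept as candidates.
    (Simplified mode detection; pathrate's procedure is more elaborate.)
    """
    bins = Counter(round(pair_estimate(d) / 1e6 / bin_mbps) for d in dispersions)
    return sorted(b * bin_mbps for b, n in bins.items()
                  if n >= bins.get(b - 1, 0) and n >= bins.get(b + 1, 0))

def adr(arrival_times, size_bits=PROBE_SIZE_BITS):
    """Asymptotic Dispersion Rate of an N-packet train:
    (N-1)*L / (t_N - t_1). A lower bound on the capacity, and *not*
    the available bandwidth, per the analysis above."""
    return (len(arrival_times) - 1) * size_bits / (arrival_times[-1] - arrival_times[0])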
The paper then proposes a priority-based policy for scheduling N such streams on a single server to reduce the probability of dynamic failure. The basic idea is to assign higher priorities to customers from streams that are closer to a dynamic failure so as to improve their chances of meeting their deadlines. The paper proposes a heuristic for assigning these priorities. The effectiveness of this approach is evaluated through simulation under various customer arrival and service patterns. The scheme is compared to a conventional scheme where all customers are serviced at the same priority level and to an imprecise computation model approach. The evaluation shows that substantial reductions in the probability of dynamic failure are achieved when the proposed policy is used.
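The abstract does not spell the heuristic out, so the following sketch should be read as one plausible instantiation rather than the paper's policy: each stream's recent history of met and missed deadlines is tracked, its distance to dynamic failure is taken to be the number of further consecutive misses it can absorb before violating an assumed (m, k)-style requirement ("at least m deadlines met out of any k consecutive"), and streams with smaller distances receive higher priority.

from collections import deque

class StreamState:
    """Track the last k deadline outcomes of one stream (True = met)."""

    def __init__(self, m, k):
        self.m, self.k = m, k
        self.history = deque([True] * k, maxlen=k)

    def record(self, met):
        self.history.append(met)

    def misses_to_failure(self):
        # Number of further consecutive misses before fewer than m of
        # the last k deadlines are met; smaller => closer to failure.
        outcomes = list(self.history)
        misses = 0
        while sum(outcomes) >= self.m:
            outcomes = outcomes[1:] + [False]
            misses += 1
        return misses

def priority(stream):
    # Higher priority (larger value) for streams closer to dynamic failure.
    return -stream.misses_to_failure()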
Collaboration in sensor networks must be fault-tolerant due to the harsh environmental conditions in which such networks can be deployed. This paper focuses on finding algorithms for collaborative target detection that are efficient in terms of communication cost, precision, accuracy, and the number of faulty sensors tolerable in the network. Two algorithms, namely value fusion and decision fusion, are identified first. When comparing their performance and communication overhead, decision fusion is found to become superior to value fusion as the ratio of faulty sensors to fault-free sensors increases. As robust data fusion requires agreement among the nodes in the network, an analysis of fully distributed and hierarchical agreement is also presented. The impact of hierarchical agreement on communication cost and system failure probability is evaluated, and a method for determining the number of tolerable faults is identified.
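A minimal sketch of the two fusion schemes, under the simplifying assumption that each node sees the same set of reports and that detection reduces to comparing energy readings against a threshold (the threshold and data layout are our choices for illustration):

def value_fusion(readings, threshold):
    """Value fusion: nodes exchange raw sensor values; each node
    averages them and compares the mean against the threshold."""
    return sum(readings) / len(readings) > threshold

def decision_fusion(readings, threshold):
    """Decision fusion: each node first makes a local binary decision,
    then the nodes take a majority vote over the one-bit decisions.
    A faulty node corrupts only its single bit, which is why this
    scheme degrades more gracefully as the fraction of faulty
    sensors grows."""
    votes = [r > threshold for r in readings]
    return sum(votes) > len(votes) / 2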
Internet applications and users have very diverse quality of service expectations, making the same-service-to-all model of the current Internet inadequate and limiting. There is a widespread consensus today that the Internet architecture has to be extended with service differentiation mechanisms so that certain users and applications can get better service than others at a higher cost. One approach, referred to as absolute differentiated services, is based on sophisticated admission control and resource reservation mechanisms in order to provide guarantees or statistical assurances for absolute performance measures, such as a minimum service rate or maximum end-to-end delay. Another approach, which is simpler in terms of implementation, deployment, and network manageability, is to offer relative differentiated services between a small number of service classes. These classes are ordered based on their packet forwarding quality, in terms of per-hop metrics for the queueing delays and packet losses, giving the assurance that higher classes are better than lower classes. The applications and users, in this context, can dynamically select the class that best meets their quality and pricing constraints, without a priori guarantees for the actual performance level of each class. The relative differentiation approach can be further refined and quantified using the proportional differentiation model. This model aims to provide the network operator with the "tuning knobs" for adjusting the quality spacing between classes, independent of the class loads. When this spacing is feasible in short timescales, it can lead to predictable and controllable class differentiation, which are two important features for any relative differentiation model. The proportional differentiation model can be approximated in practice with simple forwarding mechanisms (packet scheduling and buffer management) that we briefly describe here.

The Internet is being used by business and user communities with widely varied service expectations from the network infrastructure. For example, many companies rely on the Internet for the day-to-day management of their global enterprise. These companies are willing to pay a substantially higher cost for the best possible service level from the Internet. Similarly, there are many users who are willing to pay a higher Internet access fee in order to make use of demanding applications, such as IP telephony and videoconferencing. At the same time, there are millions of users who want to pay as little as possible for more elementary services, like exchanging e-mails and/or surfing the Web. In addition to this variety of user expectations, there has also been a rapid evolution in the set of Internet applications. A few years ago the key Internet applications were only e-mail, ftp, or newsgroups. In contrast, the present-day Internet applications have widely diverse service needs because they transfer a wide range of information types, including voice, music, video, graphics, Java scripts, and hypertext...
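Concretely, for queueing delays the "quality spacing" described above takes the form of the proportional delay differentiation constraint (the standard PDD formulation; the interval form is what short-timescale feasibility refers to):

\frac{\bar{d}_i(t, t+\tau)}{\bar{d}_j(t, t+\tau)} = \frac{\delta_i}{\delta_j},
\qquad 1 \le i, j \le N,

where \bar{d}_i(t, t+\tau) is the average queueing delay of class-i packets over the interval (t, t+\tau), and the delay differentiation parameters \delta_1 > \delta_2 > \dots > \delta_N > 0 are chosen by the operator so that higher classes receive proportionally lower delays, regardless of the class loads.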
Abstract-Tasks in a real-time control application are usually periodic and have deadline constraints by which each instance of a task is expected to complete its computation, even in the adverse circumstances caused by component failures. Techniques to recover from processor failures often involve a reconfiguration in which all tasks are assigned to fault-free processors. This reconfiguration may result in processor overload, where it is no longer possible to meet the deadlines of all tasks. In this paper, we discuss an overload management technique which discards selected task instances in such a way that the performance of the control loops in the system remains satisfactory even after a failure. The technique is based on the rationale that real-time control applications can tolerate occasional misses of the control law updates, especially if the control law is modified to account for these missed updates. The paper devises a scheduling policy which deterministically guarantees when and where the misses will occur. The paper also proposes a methodology for modifying the control law to minimize the deterioration in the control system behavior as a result of these missed control law updates.
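The abstract leaves the policy unspecified; as a toy illustration of what "deterministic guarantees on when and where the misses occur" can mean, the sketch below fixes in advance which instances of an overloaded task are discarded (at most one per window of k), so a modified control law knows exactly which updates it must compensate for. The window parameter and the choice of skipped position are our assumptions, not the paper's policy.

def skip_pattern(n_instances, k):
    """Toy deterministic miss pattern: discard the last instance in each
    window of k consecutive instances of a task, so at most 1-in-k
    control law updates is missed and the missed instants are known in
    advance (a stand-in for the paper's policy, which the abstract does
    not detail)."""
    return [i % k == k - 1 for i in range(n_instances)]

# Example: with k = 4, instances 3, 7, 11, ... are discarded, and the
# control law can be pre-modified to account for exactly those misses.
assert skip_pattern(8, 4) == [False, False, False, True] * 2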