We consider the maximization of network throughput in buffer-constrained optical networks using aggregate bandwidth allocation and reservation-based transmission control. Assuming that all flows are subject to loss-based TCP congestion control, we quantify the effects of buffer capacity constraints on bandwidth utilization efficiency through contention-induced packet loss. The analysis shows that the ability of TCP flows to efficiently utilize successful reservations is highly sensitive to the available buffer capacity. Maximizing the bandwidth utilization efficiency under buffer capacity constraints thus requires decoupling packet loss from contention-induced blocking of transmission requests. We describe a confirmed (two-way) reservation scheme that eliminates contention-induced loss, so that no packets are dropped at the network's core, and loss can be incurred only at the adequately buffer-provisioned ingress routers, where it is exclusively congestion-induced. For the confirmed signaling scheme, analytical and simulation results indicate that TCP aggregates are able to efficiently utilize the successful reservations independently of buffer constraints.
We consider the problem of packet scheduling in a network with small router buffers. The objective is to provide a statistical bound on the worst-case packet loss rate for a traffic aggregate (connection) routed along any network path, given a maximum permissible link utilization (load). This problem is argued to be of interest in networks providing statistical loss-rate guarantees to ingress-egress connections with fixed bandwidth demands. We introduce a scheduling algorithm for networks using per-packet transmission reservation. Reservations allow loss guarantees at the aggregate level to hold for individual flows within the aggregate. The algorithm employs randomization and traffic regulation at the ingress, and batch local scheduling at the links. It ensures that a large fraction of packets from each connection are consistently subject to small loss probability at every link. These packets are therefore likely to survive long paths. To obtain the desired loss-rate bound, we analyze the performance of the algorithm under global routing and bandwidth allocation scenarios that maximize the loss rate of a connection routed along an arbitrary network path. We compare the bound to that obtained using the scheduling algorithm that combines the FCFS service discipline and the drop-tail policy. We find that the proposed algorithm significantly improves the constraints on link utilization and path length necessary to achieve strong loss-rate guarantees.
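The claim that small per-link loss probabilities let packets survive long paths follows from elementary probability: if each of the L links on a path independently drops a packet with probability at most p, the packet survives the whole path with probability at least (1 − p)^L. A minimal illustrative sketch (the values p = 0.01 and L = 10 are chosen for illustration and do not come from the abstract):

```python
def path_survival_lower_bound(p: float, hops: int) -> float:
    """Lower bound on end-to-end survival probability when each of
    `hops` links independently drops a packet with probability <= p."""
    return (1.0 - p) ** hops

# A per-link loss probability of 1% still leaves a ~90% chance of
# surviving a 10-hop path; this is why bounding the per-link loss
# probability for most packets keeps long paths usable.
print(round(path_survival_lower_bound(0.01, 10), 3))  # 0.904
```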
Emulation of Output Queuing (OQ) switches using Combined Input-Output Queuing (CIOQ) switches has been studied extensively in the setting where the switch buffers have unlimited capacity. In this paper we study the general setting where the OQ switch and the CIOQ switch have finite buffer capacity B ≥ 1 packets at every output. We analyze the resource requirements of CIOQ policies in terms of the required fabric speedup and the additional buffer capacity needed at the CIOQ inputs: a CIOQ policy is said to be (s, b)-valid (for OQ emulation) if a CIOQ employing this policy can emulate an OQ switch using fabric speedup s ≥ 1, and without exceeding buffer occupancy b at any input port. For the family of work-conserving scheduling algorithms, we find that whereas every greedy CIOQ policy is valid at speedup B, no CIOQ policy is valid at speedup s < ∛B − 2 when preemption is allowed. We also find that CCF in particular is not valid at any speedup s < B. We then introduce a CIOQ policy, CEH, that is valid at speedup s ≥ √(2(B − 1)). Under CEH, the buffer occupancy at any input never exceeds 1 + ⌊(B − 1)/(s − 1)⌋. Although the speedup required for the emulation of preemptive scheduling algorithms is not constant, it may be feasible in high-speed electronic or optical switches, which are expected to have limited buffering capacity. For non-preemptive scheduling algorithms, we characterize a trade-off between the CIOQ speedup and the input buffer occupancy. Specifically, we show that for any greedy policy that is valid at speedup s > 2, the input buffer occupancy cannot exceed 1 + ⌈(B − 1)/(s − 2)⌉. We also show that a greedy variant of the CCF policy is (2, B)-valid for the emulation of non-preemptive OQ algorithms with PIFO service disciplines.
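The quoted speedup and occupancy bounds are simple closed-form expressions and can be evaluated numerically. A minimal sketch, assuming the formulas exactly as stated in the abstract (the choice B = 100 and the non-preemptive speedup s = 3 are illustrative):

```python
import math

def ceh_min_speedup(B: int) -> float:
    # CEH is valid at any fabric speedup s >= sqrt(2(B - 1)).
    return math.sqrt(2 * (B - 1))

def ceh_input_occupancy_bound(B: int, s: float) -> int:
    # Under CEH, input buffer occupancy never exceeds 1 + floor((B-1)/(s-1)).
    return 1 + math.floor((B - 1) / (s - 1))

def nonpreemptive_occupancy_bound(B: int, s: float) -> int:
    # For any greedy policy valid at speedup s > 2, the input buffer
    # occupancy cannot exceed 1 + ceil((B-1)/(s-2)).
    return 1 + math.ceil((B - 1) / (s - 2))

B = 100
s = ceh_min_speedup(B)  # ~14.07, far below the speedup of B needed by greedy policies
print(round(s, 2), ceh_input_occupancy_bound(B, s), nonpreemptive_occupancy_bound(B, 3))
# -> 14.07 8 100
```

Note how the occupancy bounds trade off against speedup: at the minimum CEH speedup the input buffers stay small, while a non-preemptive greedy policy running barely above speedup 2 may need input buffering comparable to B.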