This work is motivated by the desire to build packet switches with extremely high line rates. We consider building a packet switch from multiple lower-speed packet switches operating independently and in parallel. In particular, we consider a (perhaps obvious) parallel packet switch (PPS) architecture in which arriving traffic is demultiplexed over k identical lower-speed packet switches, switched to the correct output port, and then recombined (multiplexed) before departing from the system. In essence, the PPS performs packet-by-packet load balancing, or "inverse multiplexing," over multiple independent packet switches. Each lower-speed packet switch operates at a fraction of the external line rate R; for example, if each one operates at rate R/k, no memory buffer needs to run at the full line rate of the system. Ideally, a PPS would share the benefits of an output-queued (OQ) switch: the delay of individual packets could be precisely controlled, allowing the provision of guaranteed qualities of service. We ask: is it possible for a PPS to precisely emulate the behavior of an output-queued packet switch with the same capacity and the same number of ports? In Chapter 3, we prove that it is theoretically possible for a PPS to emulate a first-come first-served (FCFS) output-queued packet switch if each layer operates at a rate of approximately 2R/k, i.e., with a speedup of about two. This simple result is analogous to Clos' theorem giving the conditions for a three-stage circuit switch to be strictly non-blocking. We further show that the PPS can emulate a switch with any QoS queueing discipline if each layer operates at a rate of approximately 3R/k. In Chapter 4, we show that it is possible to mimic a multicast FIFO OQ switch with each layer operating at a rate that grows with N, the total number of ports in the PPS. However, these results appear to require a centralized scheduling algorithm with unreasonable processing and communication complexity.
We therefore consider, in Chapter 5, a distributed approach that maintains work conservation. Finally, in Chapter 6 we relax the requirements on the PPS and show that a PPS with no speedup (i.e., a speedup as low as one) can mimic a FIFO OQ switch within a fixed delay bound, provided fixed-size buffers are placed at the inputs and outputs of the PPS.
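The load-balancing datapath described above can be sketched in a few lines. This is a toy illustration under assumed simplifications (fixed-size cells, a plain round-robin demultiplexer, and sequence numbers for restoring order at the output); the class name and dispatch policy are illustrative, not the thesis's actual layer-selection algorithm.

```python
from collections import deque

class PPS:
    """Toy sketch of a parallel packet switch datapath: a demultiplexer
    spreads arriving cells over k slower layers, and a multiplexer
    recombines them at the output in arrival order."""

    def __init__(self, k):
        self.k = k
        self.layers = [deque() for _ in range(k)]  # one FIFO per lower-speed switch
        self.next_layer = 0    # round-robin pointer at the input
        self.seq = 0           # sequence numbers let the output restore order

    def demux(self, cell):
        """Input side: tag the cell and hand it to the next layer in turn."""
        self.layers[self.next_layer].append((self.seq, cell))
        self.seq += 1
        self.next_layer = (self.next_layer + 1) % self.k

    def mux(self):
        """Output side: drain the layers and emit cells in sequence order."""
        pending = []
        for fifo in self.layers:
            while fifo:
                pending.append(fifo.popleft())
        pending.sort()
        return [cell for _, cell in pending]
```

Because each layer is an independent FIFO running at a fraction of the line rate, the sequence numbers are what allow the output to undo any reordering introduced by the parallel paths.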
The main goal of a congestion-avoidance algorithm is to maximize throughput and minimize delay (Jain & Ramakrishnan 1988). While TCP Reno achieves high throughput, it tends to consume all of the buffer space at the bottleneck router, causing large delays. In this paper we propose a simple scheme that modifies TCP Reno's congestion-avoidance algorithm by throttling back the opening of the congestion window once an increase in round-trip time is perceived. We call the scheme TCP-BFA and have implemented it in the ns network simulator and in 4.4BSD. We show through simulations and measurements of real traffic on the Internet that TCP-BFA results in lower router buffer occupancies and lower delays while maintaining a throughput similar to that of TCP Reno. The advantages of TCP-BFA are (1) smaller router buffer size requirements, (2) an order-of-magnitude improvement in network power (the ratio of throughput to delay), (3) fewer packet losses, (4) faster detection of multiple losses due to lower retransmission timeout estimates, and (5) smoother traffic patterns.
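The window-throttling idea can be illustrated with a minimal sketch of one congestion-avoidance step. The function name, the RTT-inflation threshold `alpha`, and the hold-the-window reaction are assumptions for illustration, not the actual TCP-BFA update rule from the paper.

```python
def cwnd_update(cwnd, rtt, min_rtt, mss=1, alpha=1.25):
    """Hedged sketch of a Reno-style congestion-avoidance step that
    throttles window growth when RTT rises above its observed minimum
    (a sign that the bottleneck buffer is filling)."""
    if rtt <= alpha * min_rtt:
        # RTT near its minimum: grow ~1 segment per RTT, as Reno does.
        return cwnd + mss * mss / cwnd
    # RTT inflated: a queue is building at the bottleneck, hold the window.
    return cwnd
```

The contrast with Reno is that Reno keeps growing the window until loss occurs, whereas a scheme like this stops pushing once added delay reveals queueing, which is what keeps router buffer occupancy low.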
Today most multiplayer game servers are statically placed, which makes it hard for gamers to find equi-ping hosts for their matches. This matters especially for first-person shooter (FPS) games, a class of interactive games that is very sensitive to differences in ping between the participants and the hosting server. In this paper we present a novel solution that builds on the classic operating-systems concept of a virtual machine monitor (VMM). A VMM allows us to encapsulate the state of the game server in a virtual machine file, which can then be activated on any real machine running the VMM software. The main advantage of this solution is backward compatibility: we can take any existing FPS game and migrate it to this platform without any code changes to either the game client or the server. Another advantage is economies of scale, since such a network can be shared among different games. We describe our vMatrix framework and address how to move the virtual-machine game servers across real machines to minimize the difference in ping among all participants of a given match. We demonstrate the solution using Microsoft's popular Halo PC game, showing that it does not degrade game performance and requires no code changes.
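The equi-ping placement objective can be made concrete with a small sketch. The function name, the input shape, and the max-minus-min spread metric are illustrative assumptions; vMatrix's actual migration policy is described in the paper itself.

```python
def pick_host(ping_ms):
    """Sketch of 'equi-ping' placement: given measured pings (in ms)
    from each candidate real machine to every participant, pick the
    machine that minimizes the spread (max - min) of pings, so that no
    player gains a large latency advantage.

    ping_ms: {host: [ping to player 1, ping to player 2, ...]}
    """
    return min(ping_ms, key=lambda h: max(ping_ms[h]) - min(ping_ms[h]))
```

For example, a host 50 ms from both players would be preferred over one that is 20 ms from one player but 90 ms from the other, even though the latter has a lower minimum ping.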