SLICED PROGRAMMABLE NETWORKS

OpenFlow [4] has been demonstrated as a way for researchers to run networking experiments in their production network. Last year, we demonstrated how an OpenFlow controller running on NOX [3] could move VMs seamlessly around an OpenFlow network [1]. While OpenFlow has the potential [2] to open up control of the network, only one researcher can innovate on the network at a time. What is required is a way to divide, or slice, network resources so that researchers and network administrators can use them in parallel. Network slicing implies that actions in one slice do not negatively affect other slices, even if they share the same underlying physical hardware. A common network slicing technique is VLANs. With VLANs, the administrator partitions the network by switch port, and all traffic is mapped to a VLAN by input port or explicit tag. This coarse-grained type of network slicing complicates more interesting experiments such as IP mobility or wireless handover.

Here, we demonstrate FlowVisor, a special-purpose OpenFlow controller that allows multiple researchers to run experiments safely and independently on the same production OpenFlow network. To motivate FlowVisor's flexibility, we demonstrate four network slices running in parallel: one slice for the production network and three slices running experimental code (Figure 1). Our demonstration runs on real network hardware deployed on our production network at Stanford and on a wide-area test-bed with a mix of wired and wireless technologies.
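To make the contrast with VLAN-based slicing concrete, the sketch below shows how a slice might be defined over flowspace (regions of packet-header space) rather than by switch port. This is a hypothetical illustration only; the slice names, match predicates, and controller addresses are made up and do not reflect FlowVisor's actual configuration format or API.

```python
# Hypothetical sketch of flowspace-based slicing: each slice claims a region of
# header space and is delegated to its own controller. Illustrative only; not
# FlowVisor's real configuration syntax.

SLICES = [
    # (slice name, match predicate over header fields, controller address)
    ("production", lambda h: True,                            "tcp:prod-ctrl:6633"),
    ("mobility",   lambda h: h.get("ip_proto") == 17,         "tcp:exp1-ctrl:6633"),
    ("video",      lambda h: h.get("tp_dst") in (80, 443),    "tcp:exp2-ctrl:6633"),
    ("handover",   lambda h: h.get("dl_type") == 0x88c7,      "tcp:exp3-ctrl:6633"),
]

def controller_for(header):
    """Return the slice (and its controller) that owns this packet's flowspace.

    Experimental slices are checked before the production catch-all, so we scan
    the list in reverse order.
    """
    for name, match, ctrl in reversed(SLICES):
        if match(header):
            return name, ctrl
    return SLICES[0][0], SLICES[0][2]

print(controller_for({"ip_proto": 17}))     # lands in the "mobility" slice
print(controller_for({"dl_type": 0x0800}))  # falls through to "production"
```

Because slices are defined over header fields rather than physical ports, an experiment such as IP mobility can follow a host's traffic wherever it attaches, which port-based VLAN slicing cannot express.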
Now that our smartphones have multiple interfaces (WiFi, 3G, 4G, etc.), we have preferences for which interfaces an application may use. We may prefer to stream video over WiFi because it is fast, but to carry VoIP over 3G because it gives continued connectivity. We also have relative preferences, such as giving Netflix twice as much capacity as Dropbox. This means our mobile devices need to schedule packets in keeping with our preferences while making use of all the capacity available. This is the natural domain of fair queueing, and this paper is about the design of a packet scheduler to meet these requirements. We show that traditional fair queueing schedulers cannot take into account a user's preferences for some interfaces over others. We present a novel packet scheduler called miDRR that meets our needs by generalizing DRR for multiple interfaces. We demonstrate a prototype running in Linux and show that it works correctly and can easily run at the speeds we need.
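For readers unfamiliar with the baseline being generalized, the sketch below is textbook Deficit Round Robin over a single link, where a flow's quantum sets its relative share. It is background only, not the paper's miDRR; the flow names and quantum values are illustrative.

```python
# Minimal sketch of classic single-link DRR (background for miDRR, not miDRR itself).
from collections import deque

class DRRScheduler:
    def __init__(self):
        self.queues = {}   # flow id -> deque of packet sizes (bytes)
        self.quantum = {}  # flow id -> bytes of credit added per round (relative share)
        self.deficit = {}  # flow id -> accumulated byte credit

    def add_flow(self, flow, quantum):
        self.queues[flow] = deque()
        self.quantum[flow] = quantum
        self.deficit[flow] = 0

    def enqueue(self, flow, size):
        self.queues[flow].append(size)

    def next_round(self):
        """Yield (flow, packet size) in the order one DRR round would send them."""
        for flow, q in self.queues.items():
            if not q:
                continue
            self.deficit[flow] += self.quantum[flow]
            while q and q[0] <= self.deficit[flow]:
                size = q.popleft()
                self.deficit[flow] -= size
                yield flow, size
            if not q:
                self.deficit[flow] = 0  # a flow that goes idle does not bank credit

# Example: give "netflix" twice the share of "dropbox" by doubling its quantum.
sched = DRRScheduler()
sched.add_flow("netflix", 3000)
sched.add_flow("dropbox", 1500)
for _ in range(4):
    sched.enqueue("netflix", 1500)
    sched.enqueue("dropbox", 1500)
print(list(sched.next_round()))  # two netflix packets for every dropbox packet
```

The limitation the paper targets is visible here: the quantum expresses a share of one link's capacity, with no way to say that a flow prefers one interface over another.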
Abstract: We demonstrate a converged OpenFlow-enabled packet-circuit network, where circuit flow properties (guaranteed bandwidth, low latency, low jitter, bandwidth-on-demand, fast recovery) provide differential treatment to dynamically aggregated packet flows for voice, video, and web traffic.
Poor connectivity is common when we use wireless networks on the go. A natural way to tackle the problem is to take advantage of the multiple network interfaces on our mobile devices, and use all the networks around us. Using multiple networks at a time makes possible faster connections, seamless connectivity, and potentially lower usage charges. The goal of this paper is to explore how to make use of all the networks with today's technology. Specifically, we prototyped a solution on an Android phone. Using our prototype, we demonstrate the benefits (and difficulties) of using multiple networks at the same time.
Current networking stacks were designed for a single wired network interface. Today, it is common for a mobile device to connect to many networks that come and go, and whose rates are constantly changing. Current network stacks behave poorly in this environment because they commit an outgoing packet to a particular interface too early, making it hard to back out when network conditions change. By default, Linux will drop over 1,000 packets when a mobile client associates to a new WiFi network. In this paper, we introduce the concept of late-binding packets to their outgoing interfaces. Prior to the binding point, different flows are kept separate, to prevent unnecessarily delaying latency-sensitive traffic. After the binding point, buffers are minimized (in our design, down to just two packets) to minimize loss when network conditions change. We designed and implemented a late-binding Linux networking stack that empirically demonstrates the value of our proposition in minimizing delay of latency-sensitive packets and packet loss when networks come and go.
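The sketch below illustrates the late-binding idea in simplified form: flows stay in separate, unbound queues until an interface is actually ready to transmit, and each interface holds at most two committed packets (the bound stated in the abstract). It is a hypothetical model for exposition, not the paper's Linux implementation; the class, method names, and flow-selection policy are assumptions.

```python
# Hypothetical sketch of late binding (not the paper's Linux stack).
from collections import deque

MAX_BOUND = 2  # packets committed to an interface at any time, per the abstract

class LateBindingStack:
    def __init__(self, interfaces):
        self.flow_queues = {}                          # flow -> unbound packets
        self.bound = {i: deque() for i in interfaces}  # per-interface bound packets

    def send(self, flow, packet):
        # Before the binding point, flows are kept in separate queues.
        self.flow_queues.setdefault(flow, deque()).append(packet)

    def interface_ready(self, iface):
        """Called when `iface` can transmit: bind packets as late as possible."""
        while len(self.bound[iface]) < MAX_BOUND:
            flow = self._pick_flow()
            if flow is None:
                break
            self.bound[iface].append(self.flow_queues[flow].popleft())
        if self.bound[iface]:
            return self.bound[iface].popleft()  # hand exactly one packet to the NIC
        return None

    def interface_down(self, iface):
        """When a network goes away, only the (at most two) bound packets are at risk."""
        lost = list(self.bound[iface])
        self.bound[iface].clear()
        return lost

    def _pick_flow(self):
        # Placeholder policy: serve any flow with queued packets. A real stack
        # would prioritize latency-sensitive flows at this point.
        for flow, q in self.flow_queues.items():
            if q:
                return flow
        return None
```

The key property the sketch captures is that a change in network conditions can strand at most two packets per interface, while packets still waiting in per-flow queues remain free to be bound to whichever interface becomes available next.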