Abstract: The Internet has led to the creation of a digital society, where (almost) everything is connected and accessible from anywhere. However, despite their widespread adoption, traditional IP networks are complex and very hard to manage. It is difficult both to configure the network according to predefined policies and to reconfigure it to respond to faults, load, and changes. To make matters even more difficult, current networks are also vertically integrated: the control and data planes are bundled together. Software-Defined Networking (SDN) is an emerging paradigm that promises to change this state of affairs by breaking vertical integration, separating the network's control logic from the underlying routers and switches, promoting (logical) centralization of network control, and introducing the ability to program the network. The separation of concerns introduced between the definition of network policies, their implementation in switching hardware, and the forwarding of traffic is key to the desired flexibility: by breaking the network control problem into tractable pieces, SDN makes it easier to create and introduce new abstractions in networking, simplifying network management and facilitating network evolution. In this paper we present a comprehensive survey on SDN. We start by introducing the motivation for SDN, explaining its main concepts, how it differs from traditional networking, its roots, and the standardization activities regarding this novel paradigm. Next, we present the key building blocks of an SDN infrastructure using a bottom-up, layered approach. We provide an in-depth analysis of the hardware infrastructure, southbound and northbound APIs, network virtualization layers, network operating systems (SDN controllers), network programming languages, and network applications. We also look at cross-layer problems such as debugging and troubleshooting.
In an effort to anticipate the future evolution of this new paradigm, we discuss the main ongoing research efforts and challenges of SDN. In particular, we address the design of switches and control platforms, with a focus on aspects such as resiliency, scalability, performance, security, and dependability, as well as new opportunities for carrier transport networks and cloud providers. Last but not least, we analyze the position of SDN as a key enabler of a software-defined environment.
This paper presents a new publicly available dataset from GÉANT, the European Research and Education Network. The dataset consists of traffic matrices of the GÉANT network, built using full IGP routing information, sampled Netflow data, and BGP routing information, one per 15-minute interval over several months. Potential benefits of publicly available traffic matrices include improving our understanding of real traffic matrices and their dynamics, and making possible the benchmarking of intradomain traffic engineering methods.

MOTIVATION

A lot of effort has been put over the last few years into inferring traffic matrices from SNMP link counts [1,2,3,4]. The approach of measuring raw traffic demands directly [5,6] is rarely used, as the burden on the measurement and storage infrastructure is significant [7]. Still, recent work [8,9] indicates that obtaining precise traffic matrices is not out of reach. Contrary to single-capture-point traffic traces [10,11,12] or BGP routing data [13,14], for which numerous publicly available datasets exist, publicly available traffic matrices from a real network are rare. The only publicly available set of traffic matrices, to our knowledge, is at http://www.cs.utexas.edu/~yzhang/research/AbileneTM/, based on data from the Abilene network. Developing intradomain traffic engineering tools or traffic matrix models requires real datasets to validate the tools or the models. Without publicly available datasets, no comparison with alternative techniques or models can be performed. To help fill this gap in the networking community, this paper presents a publicly available dataset of intradomain traffic matrices.

THE GÉANT NETWORK

GÉANT is the pan-European research network and is operated by DANTE. It carries research traffic from the European National Research and Education Networks (NRENs), connecting universities and research institutions. GÉANT has a PoP in each European country.
All the routers of GÉANT are border routers. GÉANT is composed of 23 routers interconnected by 38 links; in addition, it has 53 links to other domains. GÉANT uses ISIS to compute its intradomain routes. The IGP weights of GÉANT are mainly based on the inverse of the link capacities, with some manual tuning. We obtained a libpcap trace of ISIS for the purpose of building a model of the GÉANT topology. In order to build an accurate model of GÉANT suitable for the computation of its intradomain traffic matrices, we also obtained from DANTE the interdomain routes known by GÉANT as well as a trace of the traffic transiting across GÉANT [15]. The interdomain routes are obtained from BGP, and the traffic trace is collected using Netflow. We describe these two datasets in the following paragraphs.

BGP Routing Data

In GÉANT, the BGP routes are collected using a dedicated workstation running GNU Zebra [16], a software implementation of several routing protocols including BGP. The workstation has an iBGP session with all the border routers of the network.
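The pipeline described above, mapping sampled Netflow records to ingress and egress routers and binning them into 15-minute traffic matrices, can be sketched as follows. This is a minimal illustration, not the authors' actual tooling: the record layout, the sampling rate, and the router names are assumptions for the example.

```python
from collections import defaultdict

SAMPLING_RATE = 1000   # assumed 1-in-N packet sampling for the Netflow feed
BIN_SECONDS = 15 * 60  # one traffic matrix per 15-minute interval

def build_traffic_matrices(flow_records, egress_of_prefix):
    """Aggregate sampled flow records into per-interval router-to-router matrices.

    flow_records:     iterable of (timestamp, ingress_router, dst_prefix, bytes)
    egress_of_prefix: dst_prefix -> egress router, derived from the BGP/IGP model
    """
    matrices = defaultdict(lambda: defaultdict(float))
    for ts, ingress, dst_prefix, nbytes in flow_records:
        egress = egress_of_prefix.get(dst_prefix)
        if egress is None:  # prefix not covered by the routing model
            continue
        interval = int(ts) // BIN_SECONDS
        # Scale sampled byte counts back up to estimate the real volume.
        matrices[interval][(ingress, egress)] += nbytes * SAMPLING_RATE
    return matrices

# Toy usage with hypothetical router names:
flows = [(0, "uk1", "10.0.0.0/8", 1500), (10, "fr1", "10.0.0.0/8", 500)]
tms = build_traffic_matrices(flows, {"10.0.0.0/8": "de1"})
```

In the real dataset the egress lookup is the hard part: it requires combining the iBGP feed (which next-hop a destination prefix resolves to) with the ISIS model (which border router that next-hop maps to), which is why the paper collects all three data sources.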
Abstract. Recent efforts in software-defined networking, such as OpenFlow, give unprecedented access to the forwarding plane of networking equipment. When building a network based on OpenFlow, however, one must take into account the performance characteristics of particular OpenFlow switch implementations. In this paper, we present OFLOPS, an open and generic software framework that permits the development of tests for OpenFlow-enabled switches, measuring the capabilities and bottlenecks between the forwarding engine of the switch and the remote control application. OFLOPS combines hardware instrumentation with an extensible software framework. We use OFLOPS to evaluate current OpenFlow switch implementations and make the following observations: (i) the switching performance of flows depends on the applied actions and the firmware; (ii) current OpenFlow implementations differ substantially in flow update rates as well as traffic monitoring capabilities; (iii) accurate OpenFlow command completion times can be observed only through the data plane. These observations are crucial for understanding the applicability of OpenFlow in the context of specific use cases, which have requirements in terms of forwarding table consistency, flow setup latency, flow space granularity, packet modification types, and/or traffic monitoring abilities.
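Observation (iii) rests on a simple measurement idea: instead of trusting the control-channel acknowledgment, keep probing the data plane after issuing a rule update and record when the new behavior first appears. A minimal sketch of that idea, with a hypothetical `switch` test harness standing in for OFLOPS's hardware instrumentation:

```python
import time

def measure_update_completion(switch, flow_mod, probe, timeout=5.0):
    """Estimate when a flow-table update actually takes effect by probing
    the data plane rather than trusting the control-channel reply.

    `switch` is a hypothetical harness exposing three methods:
    send_flow_mod(mod), send_probe(pkt), probe_matched_new_rule(pkt).
    """
    start = time.monotonic()
    switch.send_flow_mod(flow_mod)
    while time.monotonic() - start < timeout:
        switch.send_probe(probe)
        if switch.probe_matched_new_rule(probe):
            # First probe forwarded under the new rule: update is visible.
            return time.monotonic() - start
    return None  # update never observed in the data plane

class FakeSwitch:
    """Toy stand-in: the new rule becomes active a fixed delay after flow_mod."""
    def __init__(self, delay):
        self.delay = delay
        self.t0 = None
    def send_flow_mod(self, mod):
        self.t0 = time.monotonic()
    def send_probe(self, pkt):
        pass
    def probe_matched_new_rule(self, pkt):
        return time.monotonic() - self.t0 >= self.delay

latency = measure_update_completion(FakeSwitch(0.05), "flow_mod", "probe")
```

The gap between this data-plane estimate and the control channel's barrier reply is exactly the discrepancy the paper reports for real switch firmware.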
The largest IXPs carry, on a daily basis, traffic volumes in the petabyte range, similar to what some of the largest global ISPs reportedly handle. This little-known fact is due to a few hundred member ASes exchanging traffic with one another over the IXP's infrastructure. This paper reports on a first-of-its-kind, in-depth analysis of one of the largest IXPs worldwide, based on nine months' worth of sFlow records collected at that IXP in 2011. A main finding of our study is that the number of actual peering links at this single IXP exceeds the total number of peer-to-peer AS links in the entire Internet known as of 2010. To explain such a surprisingly rich peering fabric, we examine in detail this IXP's ecosystem and highlight the diversity of networks that are members at this IXP and connect there with other member ASes for reasons that are similarly diverse, but can be partially inferred from their business types and observed traffic patterns. In the process, we investigate this IXP's traffic matrix and illustrate what its temporal and structural properties can tell us about the member ASes that generated the traffic in the first place. While our results suggest that these large IXPs can be viewed as a microcosm of the Internet ecosystem itself, they also argue for a re-assessment of the mental picture that our community has of this ecosystem.
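The peering-link count at the heart of the main finding reduces to a simple aggregation over the sFlow data: a peering link is an unordered pair of member ASes observed exchanging traffic. A toy sketch under that assumption (the record layout and AS numbers are illustrative, not the paper's actual schema):

```python
def peering_links(sflow_records):
    """Infer the set of member-to-member peering links visible in the traffic.

    Each record is (src_member_as, dst_member_as, bytes); a link is an
    unordered AS pair that exchanged at least one sampled packet.
    """
    links = set()
    for src_as, dst_as, _nbytes in sflow_records:
        if src_as != dst_as:
            # frozenset makes the pair direction-insensitive.
            links.add(frozenset((src_as, dst_as)))
    return links

# Toy usage with documentation-range AS numbers:
records = [(64500, 64501, 1000), (64501, 64500, 400), (64502, 64500, 100)]
links = peering_links(records)
```

The hard part in practice, which the paper addresses, is attributing sampled frames to member ASes in the first place and filtering non-peering traffic; the counting itself is as simple as above.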
The most widely used technique for IP geolocation consists of building a database that maps IP blocks to geographic locations. Several such databases are available and are frequently used by many services and web sites on the Internet. Contrary to widespread belief, geolocation databases are far from being as reliable as they claim. In this paper, we compare several current geolocation databases, both commercial and free, to gain insight into the limitations of their usability. First, the vast majority of entries in the databases refer to only a few popular countries (e.g., the U.S.). This creates an imbalance in the representation of countries across the IP blocks of the databases. Second, these entries reflect neither the original allocation of IP blocks nor BGP announcements. In addition, we quantify the accuracy of geolocation databases on a large European ISP based on ground-truth information. This is the first study using ground truth to show that the overly fine granularity of database entries makes their accuracy worse, not better. Geolocation databases can claim country-level accuracy, but certainly not city-level.
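Accuracy claims of this kind are typically quantified as the great-circle distance between the database's answer and the ground-truth location for each IP, summarized by a median. A minimal sketch of that evaluation (the mappings and the IP address are hypothetical; this is not the paper's measurement code):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def median_error_km(db_lookup, ground_truth):
    """db_lookup / ground_truth: ip -> (lat, lon).

    Returns the median geolocation error over IPs present in both mappings,
    or None if there is no overlap.
    """
    errors = sorted(
        haversine_km(*db_lookup[ip], *ground_truth[ip])
        for ip in ground_truth if ip in db_lookup
    )
    return errors[len(errors) // 2] if errors else None

# Toy usage: database places an IP in London, ground truth says Paris.
gt = {"192.0.2.1": (48.8566, 2.3522)}   # Paris
db = {"192.0.2.1": (51.5074, -0.1278)}  # London
err = median_error_km(db, gt)
```

An error of a few hundred kilometres, as in this toy case, is compatible with country-level accuracy but rules out city-level claims, which is the distinction the paper draws.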