For many years the research community has attempted to model the Internet in order to better understand its behaviour and improve its performance. Since much of the structural complexity of the Internet is due to its multilevel operation, the Internet's multilevel nature is an important and non-trivial feature that researchers must consider when developing appropriate models. In this paper, we compare the normalised Laplacian spectra of physical- and logical-level topologies of four commercial ISPs and two research networks against the US freeway topology, and show analytically that physical-level communication networks are structurally similar to the US freeway topology. We also generate synthetic Gabriel graphs of physical topologies and show that while these synthetic topologies capture the grid-like structure of actual topologies, they are more expensive than the actual physical-level topologies under a network cost model. Moreover, we introduce a distinction between geographic graphs, which include the degree-2 nodes needed to capture the geographic paths that physical links follow, and structural graphs, which eliminate these degree-2 nodes and capture only the interconnection properties of the physical graph and its multilevel relationship to logical graph overlays. Furthermore, we develop a multilevel graph evaluation framework and analyse the resilience of single-level and multilevel graphs using the flow robustness metric. We then confirm that dynamic routing performed over the lower levels helps to improve the performance of a higher-level service, and that adaptive challenges degrade the performance of the higher levels more severely than non-adaptive challenges.
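As a minimal sketch of the Gabriel graph construction mentioned above (not the paper's implementation): two points are joined by an edge exactly when the disk having their segment as diameter contains no other point, which yields the grid-like, planar structure typical of physical topologies. The coordinates below are illustrative only.

```python
from itertools import combinations

def gabriel_graph(points):
    """Build a Gabriel graph over 2-D points.

    Points u, v are adjacent iff the closed disk with diameter uv contains
    no third point w; equivalently, no w satisfies
    d(u,w)^2 + d(w,v)^2 <= d(u,v)^2.
    Returns edges as index pairs (i, j) with i < j.
    """
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    n = len(points)
    edges = set()
    for i, j in combinations(range(n), 2):
        uv = d2(points[i], points[j])
        if all(d2(points[i], points[k]) + d2(points[k], points[j]) > uv
               for k in range(n) if k not in (i, j)):
            edges.add((i, j))
    return edges
```

For example, with points (0,0), (2,0), and (1,0.5), the midpoint-disk test blocks the long edge between the first two points because the third point lies inside its diameter disk.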
Understanding and modelling the Internet has been a major research challenge, in part due to the complexity of the interaction among its protocols and in part due to its multilevel, multidomain topological structure. It is therefore crucial to properly analyse each structural level of the Internet, both to gain a better understanding of it and to improve its resilience properties. In this paper, we first present the physical and logical topologies of two ISPs and compare these topologies with the US interstate highway topology, first using graph metrics and then using the normalised Laplacian spectrum. Our results indicate that physical network topologies are closely correlated with the motorway transportation topology. Finally, we study the spectral properties of various communication networks and observe that the spectral radius of the normalised Laplacian matrix is a good indicator of graph connectivity when comparing graphs of different size and order.
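The normalised Laplacian spectrum used in the comparisons above can be computed directly from an adjacency matrix. The sketch below (assuming numpy; not the papers' code) forms L = I − D^{−1/2} A D^{−1/2}, whose eigenvalues lie in [0, 2]; the largest eigenvalue is the spectral radius discussed above.

```python
import numpy as np

def normalized_laplacian_spectrum(A):
    """Eigenvalues of L = I - D^{-1/2} A D^{-1/2}, sorted ascending.

    A is a symmetric adjacency matrix. Isolated nodes (degree 0) are
    handled by leaving their scaling factor at zero.
    """
    A = np.asarray(A, dtype=float)
    deg = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    return np.sort(np.linalg.eigvalsh(L))
```

A quick sanity check: for the complete graph K_n the spectrum is {0, n/(n−1), …, n/(n−1)}, so for K_4 the spectral radius is 4/3.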
Graph robustness metrics have been widely used to study the behavior of communication networks in the presence of targeted attacks and random failures. Several researchers have proposed new graph metrics to better predict network resilience and survivability against such attacks, but most of these metrics have been compared against only a few established graph metrics when evaluating how effectively they measure network resilience. In this paper, we perform a comprehensive comparison of the most commonly used graph robustness metrics. First, we show how each metric is determined and calculate its values for baseline graphs. Then, using several types of random graphs, we study the accuracy of each robustness metric in predicting network resilience against centrality-based attacks. The results support three conclusions. First, our path diversity metric has the highest accuracy in predicting network resilience for structured baseline graphs. Second, the variance of node-betweenness centrality generally has the best accuracy in predicting network resilience for Waxman random graphs. Third, path diversity, network criticality, and effective graph resistance have high accuracy in measuring network resilience for Gabriel graphs.
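One common formulation of the flow robustness metric referenced in these abstracts is the fraction of node pairs that can still communicate after failures. A stdlib-only sketch (illustrative, not the papers' implementation) computes it by summing reachable pairs over connected components:

```python
from collections import deque

def flow_robustness(n, edges, failed=()):
    """Fraction of the n*(n-1)/2 node pairs that remain mutually reachable
    after the nodes in `failed` are removed (reliable flows / total flows).
    """
    failed = set(failed)
    adj = {v: set() for v in range(n) if v not in failed}
    for u, v in edges:
        if u not in failed and v not in failed:
            adj[u].add(v)
            adj[v].add(u)
    seen, connected_pairs = set(), 0
    for s in adj:
        if s in seen:
            continue
        comp, q = {s}, deque([s])  # BFS over one connected component
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in comp:
                    comp.add(w)
                    q.append(w)
        seen |= comp
        connected_pairs += len(comp) * (len(comp) - 1) // 2
    return connected_pairs / (n * (n - 1) // 2)
```

On a 5-node path, removing the middle node leaves two 2-node components, so only 2 of the 10 original pairs remain connected and the metric drops from 1.0 to 0.2.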
Software-defined networking (SDN) has recently been put forward as a promising solution for future Internet architectures. With SDN, a centrally managed and controlled network becomes more flexible and more observable. On the other hand, these advantages come with a more vulnerable environment and dangerous threats, which can cause network breakdowns, system paralysis, and online banking fraud and theft. These issues have a significantly destructive impact on organizations, companies, and even economies. Extending intelligent machine learning algorithms into a network intrusion detection system (NIDS) through SDN has attracted considerable attention in the last decade; accuracy, high performance, and real-time operation are essential for such a system to succeed. Big data availability, the diversity of data analysis techniques, and the massive improvement in machine learning algorithms enable the building of an effective, reliable, and dependable system for detecting the different types of attacks that frequently target networks. This study demonstrates the use of machine learning algorithms for traffic monitoring to detect malicious behavior in the network as part of a NIDS in the SDN controller. Classical and advanced tree-based machine learning techniques, namely decision tree, random forest, and XGBoost, are chosen to demonstrate attack detection. The NSL-KDD dataset is used for training and testing the proposed methods; it is considered a benchmark dataset for several state-of-the-art approaches in NIDS. Several advanced preprocessing techniques are applied to the dataset in order to extract the best form of the data, which produces outstanding results compared to other systems. Using just five of the 41 features of NSL-KDD, a multi-class classification task is conducted: detecting whether there is an attack and classifying its type (DDoS, PROBE, R2L, or U2R), achieving an accuracy of 95.95%.
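The tree-based classifiers named above all grow trees by ranking candidate feature splits with an impurity criterion. As a minimal stdlib sketch of one such criterion, entropy-based information gain (toy labels, not the NSL-KDD data or the study's pipeline):

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label multiset, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(labels, feature_column):
    """Entropy reduction from partitioning `labels` into groups keyed by
    the corresponding value in `feature_column` (a candidate split).
    A decision tree greedily picks the split with the highest gain.
    """
    n = len(labels)
    groups = {}
    for lab, key in zip(labels, feature_column):
        groups.setdefault(key, []).append(lab)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - remainder
```

A feature that separates attack from normal traffic perfectly yields a gain equal to the parent entropy (1 bit for a balanced binary task), while an uninformative feature yields zero gain.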
A smart city is a geographical area that uses modern technologies to facilitate the lives of its residents. Wireless sensor networks (WSNs) are important components of smart cities, and deploying IoT sensors in WSNs is a challenging aspect of network design. Sensor deployment is performed to achieve objectives such as increasing coverage, strengthening connectivity, improving robustness, or extending the lifetime of a given WSN; a sensor deployment method must therefore be carefully designed to achieve such objectives without exceeding the available budget. This study introduces a novel deployment algorithm, Evaluated Delaunay Triangulation-based Deployment for Smart Cities (EDTD-SC), which targets not only sensor distribution but also sink placement. Our algorithm utilizes Delaunay triangulation and k-means clustering to find optimal locations that improve coverage while maintaining connectivity and robustness in the presence of obstacles in the sensing area. EDTD-SC has been applied to real-world areas and cities, such as Midtown Manhattan in New York City in the United States. The results show that EDTD-SC outperforms random and regular deployments in terms of area coverage and end-to-end delay by 29.6% and 29.7%, respectively. Furthermore, it exhibits strong resilience to attacks.
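The k-means step used for sink placement can be sketched with plain Lloyd's algorithm: alternately assign each sensor location to its nearest candidate sink and move each sink to the centroid of its cluster. This is a generic illustration with toy coordinates, not EDTD-SC itself.

```python
from math import dist  # Python 3.8+

def kmeans(points, centers, iters=50):
    """Plain Lloyd's algorithm on 2-D points.

    `centers` is the initial guess (e.g. candidate sink sites); returns
    the refined center list after `iters` assign/update rounds.
    """
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda i: dist(p, centers[i]))
            clusters[i].append(p)
        centers = [
            tuple(sum(coord) / len(cl) for coord in zip(*cl)) if cl else ctr
            for cl, ctr in zip(clusters, centers)
        ]
    return centers
```

With two well-separated point groups and one initial center near each, the algorithm converges to the two group centroids in a single iteration.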
Traditional IP networks are difficult to manage owing to their rapid expansion and dynamic changes. Software-defined networks were introduced to simplify network management by separating the network control plane from the packet forwarding plane. Using one or several controllers, SDN switches can be configured to forward data packets to their destinations. The controller placement problem aims to determine the number of controllers and their locations to meet network service requirements. Early approaches used the k-median and k-center algorithms, which select k controllers to minimize propagation latency without considering network resilience. In this paper, we develop a new nodal metric, the nodal disjoint path (NDP), which measures a node's importance in terms of its diverse connectivity to other nodes. Based on NDP, we propose two algorithms, NDP-global and NDP-cluster, for determining the locations of the k controllers to increase network robustness against targeted attacks. We apply the two selection algorithms to four US-based fiber-level networks and evaluate their resilience against five centrality-based attacks and random failures. The evaluation results indicate that selecting controllers with the NDP-global algorithm, compared with the NDP-cluster, k-median, and k-center algorithms, provides better network resilience in the face of centrality-based attacks and random failures. The results also indicate that the NDP-cluster algorithm has delay performance comparable to that of the k-median algorithm while providing higher network resilience.
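The k-center baseline mentioned above is usually solved with the classical greedy 2-approximation: start from one node, then repeatedly add the node farthest from the set already chosen. A stdlib sketch over a precomputed pairwise distance matrix (e.g. shortest-path propagation delays; illustrative, not the paper's code):

```python
def greedy_k_center(dist, k, first=0):
    """Greedy 2-approximation for the k-center problem.

    `dist` is a full symmetric pairwise distance matrix. Starting from
    node `first`, repeatedly add the node whose distance to its nearest
    chosen center is largest. Returns the k chosen node indices.
    """
    centers = [first]
    while len(centers) < k:
        farthest = max(range(len(dist)),
                       key=lambda v: min(dist[v][c] for c in centers))
        centers.append(farthest)
    return centers
```

On four nodes at positions 0, 1, 9, 10 on a line, starting from the first node, the second center chosen is the node at position 10, which is the farthest from the initial pick.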
Software-defined networking (SDN) has been developed to separate the network control plane from the forwarding plane, which can decrease operational costs and the time it takes to deploy new services compared to traditional networks. Despite these advantages, this technology brings threats and vulnerabilities. Consequently, developing high-performance, real-time intrusion detection systems (IDSs) to classify malicious activities is a vital part of the SDN architecture. This article introduces two new datasets generated from an SDN using Mininet and the Ryu controller with different feature extraction tools. The datasets contain normal traffic and different types of attacks (FIN flood, UDP flood, ICMP flood, OS probe scan, port probe scan, TCP bandwidth flood, and TCP SYN flood) and are used to train a number of supervised binary classification machine learning algorithms, including k-nearest neighbor, AdaBoost, decision tree (DT), random forest, naive Bayes, multilayer perceptron, support vector machine, and XGBoost. The DT algorithm achieved scores high enough to fit a real-time application: an F1 score of 0.9995 on the attack class, an F1 score of 0.9983 on the normal class, and a throughput of 6,737,147.275 samples per second using a total of three features. In addition, data preprocessing is used to reduce model complexity, thereby increasing the overall throughput to fit a real-time system.
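The per-class F1 scores quoted above combine precision and recall as F1 = 2·TP / (2·TP + FP + FN). A stdlib sketch that computes it directly from label lists (toy labels, not the article's datasets):

```python
def f1_per_class(y_true, y_pred, cls):
    """Per-class F1 score computed from parallel label lists.

    F1 = 2*TP / (2*TP + FP + FN), where TP/FP/FN are counted with
    `cls` treated as the positive class. Returns 0.0 when undefined.
    """
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0
```

For example, if one of two attack samples is misclassified as normal, the attack class gets F1 = 2/3 while the normal class gets F1 = 0.8, which is why both per-class scores are reported separately above.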