Software-Defined Networking (SDN) is an emerging technology that offers high flexibility and adaptability. SDN applications can manage and control networks while providing load balancing, access control, and routing, which are among the most significant benefits of SDN. However, SDN can be affected by several types of conflicting flows that degrade network performance in terms of efficiency and optimisation. Such conflicts arise from the interaction and adjustment of certain flow-rule features such as priority and action. Moreover, applying machine learning algorithms to the identification and classification of conflicting flows has its own limitations. This paper therefore presents several machine learning algorithms, namely Decision Tree (DT), Support Vector Machine (SVM), Extremely Fast Decision Tree (EFDT), and a hybrid DT-SVM, for detecting and classifying conflicting flows in SDNs. The EFDT and hybrid DT-SVM algorithms build on the DT and SVM algorithms to achieve improved performance. The algorithms were evaluated for efficiency and effectiveness across a variety of metrics using flow counts ranging from 1,000 to 100,000 in steps of 10,000, on two network topologies, Fat Tree and Simple Tree, created in the Mininet simulator and connected to the Ryu controller. The experimental results show that, for conflict-flow detection, the DT and SVM algorithms achieve accuracies of 99.27% and 98.53% respectively, while the EFDT and hybrid DT-SVM algorithms achieve 99.49% and 99.27%. In addition, the proposed EFDT algorithm achieves 95.73% accuracy when classifying conflict flow types. The proposed EFDT and hybrid DT-SVM algorithms thus enable SDN applications to detect and classify conflicting flows quickly.
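To make the detection pipeline concrete, here is a minimal sketch of training DT and SVM classifiers on flow-rule features. The feature names (priority gap, match overlap, action mismatch, table distance) and the synthetic conflict labels are illustrative assumptions, not the paper's dataset:

```python
# Hypothetical sketch: train DT and SVM classifiers to flag conflicting
# SDN flow rules. Features and labels below are synthetic stand-ins.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Assumed feature vector per flow-rule pair:
# [priority_gap, match_field_overlap, action_mismatch, table_id_delta]
rng = np.random.default_rng(0)
X = rng.random((5000, 4))
y = (X[:, 1] > 0.5) & (X[:, 2] > 0.5)   # synthetic "conflict" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

dt = DecisionTreeClassifier(max_depth=8).fit(X_tr, y_tr)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)

print("DT accuracy :", accuracy_score(y_te, dt.predict(X_te)))
print("SVM accuracy:", accuracy_score(y_te, svm.predict(X_te)))
```

A hybrid DT-SVM along these lines would route easy cases through the tree and defer borderline ones to the SVM; the split criterion would be a design choice of the paper, not shown here.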
Software Defined Networking (SDN) is an emerging networking paradigm that provides greater flexibility and adaptability in network definition and control. However, SDN is logically centralized, so the scalability of the control plane (i.e., the controller) is one of the problems that needs further attention. OpenFlow, one of the standard protocols in SDN, allows the separation of the controller from the forwarding plane. The control plane can embed an SDN firewall that enforces policy and monitors network activity; however, such a firewall may affect SDN performance. In this paper, throughput is used as a performance metric to evaluate the firewall's impact on two protocols, the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP), as their traffic passes through the forwarding plane. The evaluation was carried out by simulating an SDN OpenFlow network in Mininet. The results show that implementing a firewall module in SDN causes a significant average bandwidth drop of 36% for TCP and 87% for UDP, which ultimately affects the quality of the network and its applications.
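As a rough illustration of how such a throughput comparison can be driven, the sketch below uses Mininet's Python API and its built-in iperf helper to measure TCP and UDP bandwidth between two hosts. It assumes Mininet is installed and the script runs as root; the firewall module itself is controller-side and is not reproduced here:

```python
# Minimal sketch: measure TCP and UDP throughput between two Mininet hosts,
# as one would to compare baseline vs. firewall-enabled forwarding.
from mininet.net import Mininet
from mininet.topo import SingleSwitchTopo
from mininet.log import setLogLevel

setLogLevel('info')
net = Mininet(topo=SingleSwitchTopo(2))
net.start()
h1, h2 = net.get('h1', 'h2')

tcp_result = net.iperf((h1, h2), l4Type='TCP')            # (server, client) bandwidth
udp_result = net.iperf((h1, h2), l4Type='UDP', udpBw='100M')

print('TCP:', tcp_result)
print('UDP:', udp_result)
net.stop()
```

Running this once against a plain learning switch and once with the firewall application loaded on the controller would yield the before/after bandwidth figures the abstract compares.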
The adoption of cloud computing is rising substantially due to its capability to deliver scalable computational power. The system attempts to allocate the maximum number of resources in a manner that ensures all service level agreements (SLAs) are maintained. Virtualization is a core technology of cloud computing: virtual machine (VM) instances allow cloud providers to utilize datacenter resources more efficiently. Moreover, through dynamic VM consolidation using live migration, VMs can be placed according to their current resource requirements on the minimal number of physical nodes while maintaining SLAs. However, non-optimized and inefficient VM consolidation may lead to performance degradation. Therefore, to ensure acceptable quality of service (QoS) and SLA compliance, a machine learning technique with a modified kernel for VM live migration, based on adaptive prediction of utilization thresholds, is presented. The efficiency of the proposed technique is validated with different workload patterns from PlanetLab servers.
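A minimal sketch of the adaptive-threshold idea follows, with an RBF-kernel SVR standing in for the paper's modified kernel and a variability-based threshold as an assumed heuristic:

```python
# Illustrative sketch, not the paper's exact method: predict next-interval
# host CPU utilization from a sliding window of past readings, then trigger
# live migration when the prediction crosses an adaptive threshold.
import numpy as np
from sklearn.svm import SVR

history = np.array([0.42, 0.45, 0.50, 0.48, 0.55, 0.61, 0.66, 0.70, 0.74, 0.79])
window = 3

# Lagged samples: previous `window` readings -> next reading
X = np.array([history[i:i + window] for i in range(len(history) - window)])
y = history[window:]

model = SVR(kernel='rbf', C=10.0).fit(X, y)   # RBF as a stand-in kernel
predicted = model.predict(history[-window:].reshape(1, -1))[0]

# Assumed heuristic: tighten the threshold when utilization is volatile
threshold = 0.9 - np.std(history)
if predicted > threshold:
    print(f"predicted {predicted:.2f} > threshold {threshold:.2f}: migrate VMs")
else:
    print(f"predicted {predicted:.2f} within threshold {threshold:.2f}")
```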
Software defined network (SDN) is a network architecture in which network traffic may be operated and managed dynamically according to user requirements and demands. Security is one of the major challenges of SDN, since various attacks can affect performance, and these attacks can be classified into different types. One well-known attack is the distributed denial of service (DDoS) attack. SDN simplifies network management by separating the data and control planes, but this separation gives rise to new types of DDoS attacks on SDN networks. The centralized role of the controller makes it a perfect target for attackers: such attacks can easily bring down the entire network by bringing down the controller. This research explains DDoS attacks and anomaly detection as one of the well-known detection techniques for intelligent networks.
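One classic anomaly-detection signal for such attacks, sketched below under assumed window sizes and thresholds, is the collapse of destination-IP entropy when a flood concentrates traffic on a single victim:

```python
# Minimal anomaly-detection sketch in the spirit the abstract describes:
# under a DDoS flood, destination-IP entropy in a traffic window typically
# collapses toward the victim. Window contents and threshold are assumptions.
import math
from collections import Counter

def dst_ip_entropy(window):
    """Shannon entropy (bits) of destination IPs in a window of packets."""
    counts = Counter(window)
    total = len(window)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

normal = ['10.0.0.%d' % (i % 50) for i in range(500)]    # diverse targets
attack = ['10.0.0.7'] * 450 + normal[:50]                # flood on one host

THRESHOLD = 2.0  # bits; tuned per network in practice
for label, window in (('normal', normal), ('attack', attack)):
    h = dst_ip_entropy(window)
    flag = 'ALERT' if h < THRESHOLD else 'ok'
    print(f"{label}: entropy={h:.2f} -> {flag}")
```

In an SDN setting, the controller would compute this statistic over flow counters polled from the switches and install blocking rules when the alert fires.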
In today's rapidly growing communication and internet technologies, such as 5G, cloud computing, and blockchain, information security has become a critical component. When data is transmitted in its raw form, it is vulnerable to a variety of cybersecurity attacks. This work presents a hybrid multi-stage data encryption architecture that combines sequential and pseudo-random encoding/decoding algorithms with a pre-stage text encryption step. Testing several text sizes against cover images of various sizes and formats showed that image resolution and attributes were unaffected by the embedding, and that the text should be at least 15% smaller than the cover image. Furthermore, when compared to sequential encoding/decoding, the hybrid cryptography and steganography pseudo-random encoding/decoding procedure is more efficient and less time-consuming.
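The following toy sketch conveys the hybrid idea of encrypting text first and then hiding the ciphertext bits in image LSBs visited in a pseudo-random order. The XOR stream cipher and the synthetic 8-bit image are stand-ins, not the paper's exact multi-stage scheme:

```python
# Toy sketch: encrypt text, then embed ciphertext bits into image LSBs
# in a keyed pseudo-random pixel order; extraction reverses both steps.
import numpy as np

def embed(image, text, key):
    stream = np.random.default_rng(key).integers(0, 256, len(text), dtype=np.uint8)
    cipher = np.frombuffer(text.encode(), dtype=np.uint8) ^ stream
    bits = np.unpackbits(cipher)
    flat = image.flatten()
    order = np.random.default_rng(key + 1).permutation(flat.size)[:bits.size]
    flat[order] = (flat[order] & 0xFE) | bits            # overwrite LSBs
    return flat.reshape(image.shape)

def extract(stego, n_chars, key):
    flat = stego.flatten()
    order = np.random.default_rng(key + 1).permutation(flat.size)[:n_chars * 8]
    cipher = np.packbits(flat[order] & 1)
    stream = np.random.default_rng(key).integers(0, 256, n_chars, dtype=np.uint8)
    return (cipher ^ stream).tobytes().decode()

cover = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
stego = embed(cover.copy(), "secret message", key=42)
print(extract(stego, len("secret message"), key=42))     # -> secret message
```

Because only least-significant bits change, the cover image's resolution and visible attributes are preserved, which matches the abstract's observation.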
The demand for high steady-state network traffic utilization is growing exponentially. Therefore, traffic forecasting has become essential for powering greedy applications and services, such as the internet of things (IoT) and big data for 5G networks, through better resource planning, allocation, and optimization. Forecasting accuracy has become crucial for fundamental network operations such as routing management, congestion management, and guaranteeing overall quality of service. In this paper, a hybrid network forecast model is analyzed; the model combines a non-linear autoregressive neural network (NARNN) with various smoothing techniques, namely local regression (LOESS), moving average, locally weighted scatterplot smoothing (LOWESS), the Savitzky-Golay (sgolay) filter, robust LOESS (RLOESS), and robust locally weighted scatterplot smoothing (RLOWESS). The effects of applying the smoothing techniques with varied smoothing windows are shown and the performance of the hybrid NARNN-plus-smoothing model is discussed. The results show that the hybrid model, with the assistance of the smoothing techniques that minimized data losses, can effectively enhance forecasting accuracy. Root mean square error (RMSE) is used as the performance measure and the results were verified via statistical significance tests.
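As a hedged sketch of the pipeline, the code below smooths a synthetic traffic series with a moving average, fits a simple linear autoregressive predictor (a stand-in for the NARNN), and scores the forecast with RMSE; the window sizes and the series itself are assumptions:

```python
# Sketch of the smooth-then-forecast pipeline on a synthetic traffic series.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(300)
traffic = 50 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 3, t.size)

def moving_average(x, w):
    return np.convolve(x, np.ones(w) / w, mode='valid')

smoothed = moving_average(traffic, w=5)

# Lagged design matrix: predict x[t] from x[t-3..t-1]
lags = 3
X = np.column_stack([smoothed[i:len(smoothed) - lags + i] for i in range(lags)])
y = smoothed[lags:]
split = int(0.8 * len(y))

coef, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
pred = X[split:] @ coef

rmse = np.sqrt(np.mean((y[split:] - pred) ** 2))
print(f"RMSE on held-out smoothed series: {rmse:.3f}")
```

Swapping the moving average for LOESS, LOWESS, or a Savitzky-Golay filter, and the linear predictor for a NARNN, gives the family of hybrid configurations the paper compares.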
This paper presents a review of target detection and classification in forward scattering radar (FSR), a special case of bistatic radar designed to detect and track moving targets in the narrow region along the transmitter-receiver baseline. FSR has advantages and distinctive features over other radar configurations. Previous studies have shown that FSR can be used as an alternative system for ground target detection and classification. The fundamentals of radar and FSR are addressed, classification algorithms and techniques are reviewed, and the current and future applications as well as the limitations of FSR are discussed.
High-speed mobility has become a serious concern for mobile operators due to the large frameworks of heterogeneous networks made up of multiple cell types and different frequency bands. Handover (HO) is performed in real-life scenarios when user equipment (UE) moving at high speed crosses from one network's coverage to another, and it relies on proper measurements. HO failures and call drops are observed at high speeds; thus, high-speed mobility support needs improvement, using UE speed as one of the key measurement monitoring criteria for the long-term evolution (LTE) network. This paper draws on vendor consultation in addition to real drive-test measurements on highways. The results show that velocity has a direct impact on handover quality and overall timing, and that handover performance at a UE speed of 120 km/h is better than at 140 km/h.
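For illustration only (this is not the paper's method), the sketch below shows one standard way UE speed can feed the handover decision: an LTE-style mobility state estimate counts recent cell changes and scales the time-to-trigger (TTT) down for fast UEs so the measurement event fires sooner. The thresholds and scaling factors here are assumed values, not 3GPP-mandated ones:

```python
# Illustrative speed-aware handover tuning: classify UE mobility from the
# number of handovers in an evaluation window, then scale the TTT.
def mobility_state(handover_count):
    """Classify UE mobility from handovers observed in the window."""
    if handover_count >= 10:
        return 'high'
    if handover_count >= 5:
        return 'medium'
    return 'normal'

def scaled_ttt(base_ttt_ms, state, sf_medium=0.5, sf_high=0.25):
    scale = {'normal': 1.0, 'medium': sf_medium, 'high': sf_high}[state]
    return base_ttt_ms * scale

for count in (2, 6, 12):
    state = mobility_state(count)
    print(f"{count} handovers -> {state:>6} mobility, "
          f"TTT = {scaled_ttt(256, state):.0f} ms")
```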