Software-Defined Networking (SDN) has emerged as a promising network architecture that gives network operators more control over the network infrastructure. The controller, often called the operating system of the SDN, is responsible for running various network applications and maintaining several network services and functionalities. Despite all its capabilities, the introduction of various architectural entities in SDN poses many security threats and potential targets. Distributed Denial of Service (DDoS) is a rapidly growing attack that poses a tremendous threat to the Internet. Because the control layer is vulnerable to DDoS attacks, the goal of this paper is to detect attack traffic by exploiting the centralized control aspect of SDN. In the field of SDN, various machine learning (ML) techniques are now being deployed to detect malicious traffic. Despite these works, choosing relevant features and accurate classifiers for attack detection remains an open question. For better detection accuracy, in this work a Support Vector Machine (SVM) is assisted by kernel principal component analysis (KPCA) and a genetic algorithm (GA). In the proposed SVM model, KPCA reduces the dimension of the feature vectors, and the GA optimizes the SVM parameters. To reduce the noise caused by feature differences, an improved kernel function (N-RBF) is proposed. The experimental results show that, compared to a single SVM, the proposed model achieves more accurate classification with better generalization. Moreover, the proposed model can be embedded within the controller to define security rules that prevent possible attacks.
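The KPCA-plus-GA pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it uses scikit-learn's standard RBF kernel (not the proposed N-RBF), synthetic data in place of real traffic features, and a deliberately tiny genetic algorithm over the SVM hyperparameters C and gamma.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for flow-level traffic features (not the paper's dataset).
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Step 1: KPCA reduces the feature dimension before classification.
X_kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.1).fit_transform(X)

# Step 2: a toy genetic algorithm searches over SVM hyperparameters (C, gamma).
rng = np.random.default_rng(0)

def fitness(ind):
    C, gamma = ind
    return cross_val_score(SVC(C=C, gamma=gamma), X_kpca, y, cv=3).mean()

pop = [(10 ** rng.uniform(-2, 2), 10 ** rng.uniform(-3, 1)) for _ in range(8)]
for _ in range(5):                                       # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:4]                                    # selection: keep fittest half
    children = []
    for _ in range(4):
        a, b = rng.choice(4, size=2, replace=False)
        C = np.sqrt(parents[a][0] * parents[b][0])       # crossover: geometric mean
        g = np.sqrt(parents[a][1] * parents[b][1])
        C *= 10 ** rng.normal(0, 0.1)                    # mutation: log-scale jitter
        g *= 10 ** rng.normal(0, 0.1)
        children.append((C, g))
    pop = parents + children

best = max(pop, key=fitness)
print("best (C, gamma):", best, "cv accuracy:", round(fitness(best), 3))
```

A production version would use a larger population, real flow statistics as features, and the custom N-RBF kernel; the structure (dimension reduction, then evolutionary hyperparameter search) is what the abstract describes.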
Fog computing has been prioritized over cloud computing for latency-sensitive Internet of Things (IoT) services. We consider a resource-limited fog system in which real-time tasks with heterogeneous resource configurations must be allocated within their execution deadlines. Two modules are designed to handle real-time continuous streaming tasks. The first, task classification and buffering (TCB), classifies task heterogeneity using dynamic fuzzy c-means clustering and buffers tasks into parallel virtual queues according to enhanced least laxity time. The second, task offloading and optimal resource allocation (TOORA), decides whether to offload each task to the cloud or the fog and optimally assigns fog-node resources using the whale optimization algorithm, which provides high throughput. The simulation results of our proposed algorithm, whale optimized resource allocation (WORA), are compared with those of other models, such as shortest job first (SJF), the multi-objective monotone increasing sorting-based (MOMIS) algorithm, and the fuzzy logic based real-time task scheduling (FLRTS) algorithm. When 100 to 700 tasks are executed on 15 fog nodes, the results show that WORA saves 10.3% of the average cost of MOMIS and 21.9% of the average cost of FLRTS. In terms of energy consumption, WORA consumes 18.5% less than MOMIS and 30.8% less than FLRTS. WORA also performs 6.4% better than MOMIS and 12.9% better than FLRTS in terms of makespan, and 2.6% better than MOMIS and 4.3% better than FLRTS in terms of successful task completion.
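The least-laxity ordering used by the TCB module can be illustrated with a small sketch. This is a simplified version under assumed task fields (`deadline`, `exec_time`; names invented here), not the paper's "enhanced" variant: laxity is the slack a task would have if started now, and tasks with the least slack are served first.

```python
# Hypothetical task records; field names and values are illustrative only.
tasks = [
    {"id": "t1", "deadline": 10.0, "exec_time": 3.0},
    {"id": "t2", "deadline": 6.0,  "exec_time": 4.0},
    {"id": "t3", "deadline": 8.0,  "exec_time": 1.0},
]

def laxity(task, now=0.0):
    # Slack before the deadline if the task started executing right now.
    return task["deadline"] - now - task["exec_time"]

# Buffer tasks into a queue ordered by laxity (smallest slack = most urgent).
queue = sorted(tasks, key=laxity)
print([t["id"] for t in queue])   # t2 has the least slack, so it runs first
```

In the full system, the fuzzy c-means clustering step would first group tasks by resource heterogeneity, with one such laxity-ordered virtual queue per cluster.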
In recent years, Cloud computing has grown exponentially, mainly because of its extraordinary services: expanding computation power, massive storage, and other services with maintained quality of service (QoS). Task allocation is one of the best ways to improve various performance parameters in the cloud, but when multiple heterogeneous clouds come into the picture, the allocation problem becomes more challenging. This work proposes a resource-based task allocation algorithm, which is implemented and analyzed to assess the improved performance of a heterogeneous multi-cloud network. The proposed algorithm, Energy-aware Task Allocation in Multi-Cloud Networks (ETAMCN), minimizes overall energy consumption and also reduces the makespan. The results show that the makespan approximately overlaps across different tasks and does not differ significantly. However, the average energy consumption improves through ETAMCN by approximately 14%, 6.3%, and 2.8% compared with the random allocation algorithm, the Cloud Z-Score Normalization (CZSN) algorithm, and the multi-objective scheduling algorithm with fuzzy resource utilization (FR-MOS), respectively. The average SLA violation of ETAMCN is also observed for different scenarios.
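The idea of energy-aware allocation across heterogeneous clouds can be sketched with a simple greedy stand-in (not the actual ETAMCN pseudocode): each task is placed on the cloud where its execution costs the least energy, with energy modeled as power draw times execution time. The processing rates and power figures below are invented for illustration.

```python
# Illustrative greedy energy-aware allocation; all numbers are assumptions.
clouds = [
    {"name": "cloud_a", "rate": 2000.0, "power": 120.0, "busy_until": 0.0},
    {"name": "cloud_b", "rate": 1500.0, "power": 70.0,  "busy_until": 0.0},
]
tasks = [5000.0, 8000.0, 3000.0]   # task lengths in million instructions (MI)

def energy(cloud, length):
    # watts * seconds = joules; rate is MI/s, so length/rate is execution time.
    return cloud["power"] * (length / cloud["rate"])

total_energy = 0.0
for length in tasks:
    # Pick the cloud whose execution of this task costs the least energy.
    best = min(clouds, key=lambda c: energy(c, length))
    best["busy_until"] += length / best["rate"]   # tasks queue on the chosen cloud
    total_energy += energy(best, length)

makespan = max(c["busy_until"] for c in clouds)
print(f"total energy = {total_energy:.1f} J, makespan = {makespan:.2f} s")
```

Note that a purely energy-greedy rule piles every task onto the most efficient cloud and inflates the makespan, which is why ETAMCN and the FR-MOS baseline treat this as a multi-objective problem rather than a single-objective one.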