The need for timely identification of Distributed Denial-of-Service (DDoS) attacks in the Internet of Things (IoT) has become critical to minimizing security risks as the number of IoT devices deployed globally grows rapidly and the volume of such attacks rises to unprecedented levels. Instant detection facilitates network security by speeding up the warning and disconnection of infected IoT devices from the network, thereby preventing the botnet from propagating and stopping additional attacks. Several methods have been developed for detecting botnet attacks, such as Swarm Intelligence (SI) and Evolutionary Computing (EC)-based algorithms. In this study, we propose a Local-Global best Bat Algorithm for Neural Networks (LGBA-NN) to select both feature subsets and hyperparameters for efficient detection of botnet attacks, inferred from nine commercial IoT devices infected by two botnets: Gafgyt and Mirai. The proposed Bat Algorithm (BA) adopts a local-global best-based inertia weight to update each bat's velocity in the swarm. To address BA's limited swarm diversity, we propose population initialization with a Gaussian distribution. Furthermore, the local search mechanism follows the Gaussian density function and the local-global best function to achieve better exploration during each generation. The enhanced BA is further employed for neural network hyperparameter tuning and weight optimization to classify ten different botnet attacks together with one benign target class. The proposed LGBA-NN algorithm was tested on the N-BaIoT data set, which contains extensive real traffic data with benign and malicious target classes. The performance of LGBA-NN was compared with several recent advanced approaches, such as weight optimization using Particle Swarm Optimization (PSO-NN) and BA-NN.
The experimental results revealed the superiority of LGBA-NN with 90% accuracy over other variants, i.e., BA-NN (85.5% accuracy) and PSO-NN (85.2% accuracy) in multi-class botnet attack detection.
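The local-global best velocity update described above can be sketched as follows. The inertia-weight range, frequency bounds, and the Gaussian population draw are illustrative assumptions, not the paper's exact equations:

```python
import numpy as np

rng = np.random.default_rng(0)

def lgb_velocity_update(v, x, local_best, global_best,
                        f_min=0.0, f_max=2.0):
    """One BA velocity update using both local and global best positions.

    The inertia weight w drawn below is an illustrative assumption;
    the paper's exact local-global best inertia formula may differ.
    """
    beta = rng.random(v.shape[0])            # per-bat random draw
    freq = f_min + (f_max - f_min) * beta    # pulse frequency per bat
    w = rng.uniform(0.4, 0.9)                # inertia weight (assumed range)
    return (w * v
            + (x - local_best) * freq[:, None]
            + (x - global_best) * freq[:, None])

# Gaussian population initialization, mirroring the abstract's
# diversity fix: 5 bats in a 3-dimensional search space
x = rng.normal(size=(5, 3))
v = np.zeros((5, 3))
v = lgb_velocity_update(v, x, local_best=x.mean(axis=0), global_best=x[0])
print(v.shape)  # (5, 3)
```

Each bat is pulled toward both its neighborhood's best and the swarm's global best, which is the exploration/exploitation balance the abstract attributes to the local-global best scheme.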
Particle swarm optimization (PSO) is a population-based stochastic search technique inspired by the social foraging behavior of bird flocking and fish schooling. PSO is widely used to solve diverse optimization problems. Population initialization is a critical factor in the PSO algorithm, since it considerably influences diversity and convergence during the search. Rather than drawing the initial population from a random distribution, quasi-random (low-discrepancy) sequences can be used to improve diversity and convergence. In this paper, PSO is extended with a new initialization technique based on the WELL generator together with a low-discrepancy sequence, making it better suited to optimization problems in large-dimensional search spaces; the proposed variant is termed WE-PSO. The suggested solution was verified on fifteen well-known unimodal and multimodal benchmark test problems extensively used in the literature. Moreover, the performance of WE-PSO was compared with standard PSO and two other initialization approaches, Sobol-based PSO (SO-PSO) and Halton-based PSO (H-PSO). The findings indicate that WE-PSO outperforms the standard techniques on multimodal problems, validating the efficacy and effectiveness of our approach. In addition, the proposed approach was applied to artificial neural network (ANN) learning and contrasted with the standard backpropagation algorithm, standard PSO, H-PSO, and SO-PSO; our technique achieves a higher accuracy score and outperforms the traditional methods. The outcome of our work also provides insight into how the proposed initialization technique affects the quality of the cost function, convergence, and diversity.
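A minimal sketch of low-discrepancy population initialization follows, using the Halton sequence that underlies the H-PSO baseline (the WELL-based variant is not specified in enough detail here to reproduce). Each dimension uses the radical inverse in a distinct prime base, so points fill the search box far more evenly than uniform random draws:

```python
def radical_inverse(n, base):
    """Van der Corput radical inverse of integer n in the given base."""
    inv, denom = 0.0, 1.0
    while n > 0:
        denom *= base
        n, digit = divmod(n, base)
        inv += digit / denom
    return inv

def halton_population(pop_size, dim, lo=-5.0, hi=5.0):
    """Low-discrepancy initial swarm: one prime base per dimension."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29][:dim]
    return [[lo + (hi - lo) * radical_inverse(i + 1, b) for b in primes]
            for i in range(pop_size)]

# 8 particles in a 2-dimensional search space over [-5, 5]^2
swarm = halton_population(8, 2)
print(swarm[0])
```

The bounds and swarm size are illustrative; in a full PSO run these points would replace the uniform-random initial positions before the usual velocity updates.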
The coronavirus disease (COVID-19) outbreak, which began in December 2019, has claimed numerous lives and impacted all aspects of human life. As time passed, COVID-19 was declared a pandemic by the World Health Organization (WHO), putting a tremendous strain on substantially all countries, particularly those with poor health services and delayed reaction times. This recently identified virus is highly contagious. Controlling its rapid spread requires early detection of infected people through comprehensive screening. For COVID-19 viral diagnosis and follow-up, chest radiography imaging is an excellent tool. Deep learning (DL) has been used for a variety of healthcare purposes, including diabetic retinopathy detection, image classification, and thyroid diagnosis. DL is a useful strategy for combating the COVID-19 outbreak because there are so many streams of medical images (e.g., X-rays, CT, and MRI). In this study, we used the benchmark chest X-ray scan (CXRS) dataset for both COVID-19-infected and noninfected patients. We evaluate the results of DL-based convolutional neural network (CNN) models after preprocessing the scans and using data augmentation. Transfer learning (TL) is used to improve the algorithm's classification performance for chest radiography imaging. Finally, features of the attention and feature interweave modules are combined to create a more accurate feature map. The architecture is trained for COVID-19 CXRS using CNN, and the newly generated feature layer is applied to the TL architecture. The experimental results found that training enhances the CNN + TL algorithm's ability to classify CXRS, with an overall detection accuracy of 99.3%, precision (0.97), recall (0.98), f-measure (0.98), and receiver operating characteristic (ROC) curve (area = 0.97). The results show that further training improves the classification architecture's performance, reaching 99.3%.
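The paper's full pipeline combines a CNN backbone with attention and feature-interweave modules, which cannot be reproduced from the abstract alone. The core transfer-learning idea, however (freeze a pretrained feature extractor and train only a new classification head), can be sketched in plain NumPy; the random projection standing in for the pretrained backbone and all sizes below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def frozen_features(images, W_pre):
    """Stand-in for a frozen pretrained backbone: a fixed projection
    plus ReLU (the real model would be a pretrained CNN)."""
    return np.maximum(images @ W_pre, 0.0)

def train_head(feats, labels, lr=0.1, epochs=200):
    """Train only the new classification head (logistic regression),
    the essence of transfer learning with frozen layers."""
    w = np.zeros(feats.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(feats @ w)))        # sigmoid
        w -= lr * feats.T @ (p - labels) / len(labels)  # gradient step
    return w

# toy "scans": 64-pixel vectors, two separable classes
X = rng.normal(size=(40, 64)) + np.outer(np.repeat([0, 1], 20), np.ones(64))
y = np.repeat([0.0, 1.0], 20)
W_pre = rng.normal(size=(64, 16)) / 8.0               # frozen weights
w = train_head(frozen_features(X, W_pre), y)
acc = np.mean(((frozen_features(X, W_pre) @ w) > 0) == y)
print(f"train accuracy: {acc:.2f}")
```

Only `w` is updated during training, which is why TL converges quickly even with few labeled scans.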
Breast cancer is a dangerous disease with high morbidity and mortality rates. One of the most important aspects of breast cancer treatment is obtaining an accurate diagnosis. Machine learning (ML) and deep learning techniques can help doctors make diagnostic decisions. This paper proposes an optimized deep recurrent neural network (RNN) model based on RNN and the Keras-Tuner optimization technique for breast cancer diagnosis. The optimized deep RNN consists of the input layer, five hidden layers, five dropout layers, and the output layer. For each hidden layer, we optimized the number of neurons and the rate of the corresponding dropout layer. Three feature-selection methods were used to select the most important features from the database. Five regular ML models, namely decision tree (DT), support vector machine (SVM), random forest (RF), naive Bayes (NB), and K-nearest neighbors (KNN), were compared with the optimized deep RNN. Both the regular ML models and the optimized deep RNN were trained on the selected features. The results showed that the optimized deep RNN with the features selected by the univariate method achieved the highest performance in cross-validation (CV) and testing compared to the other models.
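The per-layer search over neuron counts and dropout rates can be sketched as a Keras-Tuner-style random search. The search space, trial budget, and the stub objective below (which stands in for "train the RNN and return validation accuracy") are all illustrative assumptions:

```python
import random

random.seed(0)

SEARCH_SPACE = {
    "units": [32, 64, 128, 256],             # neurons per hidden layer
    "dropout": [0.1, 0.2, 0.3, 0.4, 0.5],    # dropout-layer rates
}

def validation_score(units, dropout):
    """Stub objective standing in for training the deep RNN and
    measuring validation accuracy; it peaks at (128, 0.3) purely
    for illustration."""
    return 1.0 - abs(units - 128) / 256 - abs(dropout - 0.3)

def random_search(trials=20):
    """Keras-Tuner-style random search: sample configs, keep the best."""
    best, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        s = validation_score(**cfg)
        if s > best_score:
            best, best_score = cfg, s
    return best, best_score

best_cfg, score = random_search()
print(best_cfg)
```

Keras-Tuner applies the same loop with real training runs per trial (and also offers Hyperband and Bayesian strategies); in the paper's model this search is repeated for each of the five hidden/dropout layer pairs.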
The evolution and expansion of IoT devices have reduced human effort, increased resource utilization, and saved time; however, IoT devices also create significant challenges, such as a lack of security and privacy, making them vulnerable to IoT-based botnet attacks. There is a need for efficient, fast models that can work in real time with stability. The present investigation developed two novel Deep Neural Network (DNN) models, DNNBoT1 and DNNBoT2, to detect and classify well-known IoT botnet attacks such as Mirai and BASHLITE from nine compromised industrial-grade IoT devices. PCA was used for feature extraction to enable effective and accurate botnet classification in IoT environments. The models were designed based on rigorous hyperparameter tuning with GridSearchCV, and early stopping was used to avoid overfitting and underfitting in both DNN models. An in-depth assessment and evaluation showed that the developed models are among the best-performing in terms of accuracy and efficiency. The novelty of the present investigation is that the developed models bridge existing gaps by using a real dataset while achieving high accuracy and a significantly lower false-alarm rate. The results were evaluated against earlier studies and deemed efficient at detecting botnet attacks on the real dataset.
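The PCA step that precedes the DNN classifiers can be sketched directly from the singular value decomposition. The toy data shape below (200 flows, 115 features, roughly the N-BaIoT feature count) and the component count are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def pca_fit_transform(X, n_components):
    """Project traffic features onto their top principal components,
    the same dimensionality reduction applied before the DNNs."""
    Xc = X - X.mean(axis=0)                  # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T          # scores in component space

# toy stand-in for network-traffic statistics: 200 flows, 115 features
X = rng.normal(size=(200, 115))
Z = pca_fit_transform(X, n_components=10)
print(Z.shape)  # (200, 10)
```

Because the right singular vectors are sorted by singular value, each successive component of `Z` captures no more variance than the previous one, which is what lets a small number of components summarize the traffic features fed to DNNBoT1 and DNNBoT2.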
E-Vehicles are used for transportation and, with a vehicle-to-grid optimization approach, can also serve as a backup source of energy for renewable energy sources. Renewable energy sources are integrated to meet consumer demand, mitigate active and reactive power losses, and maintain the voltage profile. Because renewable energy sources do not supply power all day, extra electricity to meet peak demand may be supplied through e-Vehicles. Random integration of e-Vehicles, however, may cause system unbalance problems and therefore requires a coordinated solution. The objective of this paper is to integrate e-Vehicles with the grid as a backup source of energy through a vehicle-to-grid optimization approach that reduces active and reactive power losses and maintains the voltage profile. Three case studies are discussed: (i) integration of renewable energy sources alone; (ii) integration of e-Vehicles alone; (iii) integration of renewable energy sources and e-Vehicles in hybrid mode. The simulation results show the effectiveness of the integration, with active and reactive power losses minimized in the third case.
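The loss-reduction mechanism can be illustrated on a toy radial feeder: power injected by e-Vehicles at the load bus reduces the current drawn through the line and hence the I²R (active) and I²X (reactive) losses. All impedance and demand values below are assumed per-unit numbers, and the brute-force search stands in for the paper's optimization approach, which would use a full load-flow model:

```python
import numpy as np

# Toy radial feeder: source -> line (R + jX) -> load bus with an EV fleet.
R, X = 0.05, 0.10          # line impedance, per-unit (assumed)
P_load, Q_load = 1.0, 0.4  # bus demand, per-unit (assumed)
V = 1.0                    # bus voltage magnitude, per-unit

def line_losses(p_ev):
    """Active and reactive line losses for a given EV injection."""
    p_net = P_load - p_ev                   # power still drawn from the grid
    i_sq = (p_net**2 + Q_load**2) / V**2    # squared line current
    return R * i_sq, X * i_sq

# brute-force search over candidate injections
candidates = np.linspace(0.0, 1.0, 101)
best = min(candidates, key=lambda p: line_losses(p)[0])
print(best, line_losses(best))
```

Losses fall monotonically as the local injection approaches the local demand, which mirrors the hybrid case (iii) where renewables and e-Vehicles together cover the bus load.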