The stability control of nominal frequency and terminal voltage in an interconnected power system (IPS) is a persistent challenge for researchers. Load variations and other disturbances alter the active and reactive power demands, which adversely affect the normal operation of the IPS. To keep frequency and terminal voltage at their rated values, controllers installed at generating stations hold these parameters within prescribed limits by regulating the active and reactive power. This is accomplished by the load frequency control (LFC) and automatic voltage regulator (AVR) loops, which are coupled to each other. Due to the complexity of the combined AVR-LFC model, the simultaneous control of frequency and terminal voltage in an IPS requires an intelligent control strategy, and the performance of the IPS depends largely on the controllers. This work explores a control methodology based on a proportional integral-proportional derivative (PI-PD) controller for combined LFC-AVR in a multi-area IPS. The PI-PD controller was tuned with recently developed nature-inspired computation algorithms, including the Archimedes optimization algorithm (AOA), learner performance-based behavior optimization (LPBO), and modified particle swarm optimization (MPSO). In the first part of this work, the proposed methodology was applied to a two-area IPS, and the output responses of the LPBO-PI-PD, AOA-PI-PD, and MPSO-PI-PD control schemes were compared with an existing nonlinear threshold-accepting algorithm-based PID (NLTA-PID) controller. After achieving satisfactory results in the two-area IPS, the proposed scheme was examined in a three-area IPS with combined AVR and LFC. Finally, the reliability and efficacy of the proposed methodology were investigated on the three-area LFC-AVR system with the system parameters varied over a range of ±50%. The simulation results and a comprehensive comparison between the controllers clearly demonstrate that the proposed LPBO-PI-PD, AOA-PI-PD, and MPSO-PI-PD control schemes are very reliable and can effectively stabilize the frequency and terminal voltage in a multi-area IPS with combined LFC and AVR.
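As a rough illustration of the control structure this abstract refers to, the following is a minimal sketch of a discrete-time PI-PD control law in Python. The gain names, sample time, and class interface are illustrative placeholders, not the AOA-, LPBO-, or MPSO-tuned controllers reported in the work.

```python
# Minimal discrete-time PI-PD controller sketch (illustrative only).
# The gains kp_pi, ki, kp_pd, kd and the sample time dt are placeholders,
# not the tuned values reported in the paper.

class PIPDController:
    def __init__(self, kp_pi, ki, kp_pd, kd, dt):
        self.kp_pi, self.ki, self.kp_pd, self.kd, self.dt = kp_pi, ki, kp_pd, kd, dt
        self.integral = 0.0
        self.prev_y = 0.0

    def step(self, setpoint, y):
        # PI part acts on the error (e.g., frequency or voltage deviation).
        error = setpoint - y
        self.integral += error * self.dt
        pi_term = self.kp_pi * error + self.ki * self.integral

        # PD part acts on the measured output, damping its rate of change.
        dy = (y - self.prev_y) / self.dt
        self.prev_y = y
        pd_term = self.kp_pd * y + self.kd * dy

        # Control signal sent to the governor (LFC loop) or exciter (AVR loop).
        return pi_term - pd_term
```

A typical use would instantiate one such controller per loop (frequency and terminal voltage) and call step() once per simulation time step with the measured output.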
Healthcare occupies a central role in sustainable societies and has an undeniable impact on the well-being of individuals. However, over the years, various diseases have adversely affected the growth and sustainability of these societies. Among them, heart disease is escalating rapidly in both developed and developing nations and causes fatalities around the globe. To reduce the mortality caused by this disease, a framework is needed to continuously monitor a patient's heart status and thereby enable early detection and prediction of heart disease. This paper proposes a scalable Machine Learning (ML) and Internet of Things (IoT)-based three-layer architecture to continuously store and process the large amount of clinical data needed for the early detection and monitoring of heart disease. Layer 1 of the proposed framework collects data from IoT wearable/implanted smart sensor nodes, including various physiological measures that have a significant impact on the deterioration of heart status. Layer 2 stores and processes the patient data on a local web server using various ML classification algorithms. Finally, Layer 3 stores the critical data of patients on the cloud. Doctors and other caregivers can access the patient's health condition via an Android application, provide services to the patient, and protect him/her from further harm. Various performance evaluation measures, such as accuracy, sensitivity, specificity, F1-measure, MCC score, and the ROC curve, are used to check the efficiency of the proposed IoT-based heart disease prediction framework. It is anticipated that this system will assist the healthcare sector and doctors in diagnosing heart patients in the initial phases.
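The evaluation measures listed in this abstract can be computed as in the following minimal sketch using scikit-learn; the arrays y_true, y_pred, and y_score are hypothetical stand-ins for the framework's actual labels and classifier outputs.

```python
# Sketch of the evaluation metrics named in the abstract, computed with
# scikit-learn on hypothetical labels y_true, predictions y_pred, and scores y_score.
from sklearn.metrics import (accuracy_score, recall_score, f1_score,
                             matthews_corrcoef, roc_auc_score, confusion_matrix)

def evaluate(y_true, y_pred, y_score):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "sensitivity": recall_score(y_true, y_pred),   # true-positive rate
        "specificity": tn / (tn + fp),                 # true-negative rate
        "f1": f1_score(y_true, y_pred),
        "mcc": matthews_corrcoef(y_true, y_pred),
        "roc_auc": roc_auc_score(y_true, y_score),     # area under the ROC curve
    }
```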
Distributed denial-of-service (DDoS) attacks pose an increasing threat to businesses and government agencies. They harm internet businesses, limit access to information and services, and damage corporate brands. Attackers use application-layer DDoS attacks that are difficult to detect because they impersonate legitimate users. In this study, we address novel application-layer DDoS attacks by analyzing the characteristics of incoming packets, including the size of HTTP frame packets, the number of source Internet Protocol (IP) addresses, constant mappings of ports, and the number of IP addresses using proxy IPs. We analyzed client behavior in public attacks using standard datasets, the CTU-13 dataset, real web logs from our organization, and datasets created experimentally with the DDoS attack tools Slowloris, HULK, GoldenEye, and XerXeS. A multilayer perceptron (MLP), a deep learning algorithm, is used to evaluate the effectiveness of metrics-based attack detection. Simulation results show that the proposed MLP classification algorithm detects DDoS attacks with an efficiency of 98.99%. The proposed technique also yielded the lowest false-positive rate, 2.11%, compared to conventional classifiers, i.e., Naïve Bayes, Decision Stump, Logistic Model Tree, Naïve Bayes Updateable, Naïve Bayes Multinomial Text, AdaBoostM1, Attribute Selected Classifier, Iterative Classifier, and OneR.
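The following sketch shows how an MLP classifier could be trained on the four packet-level metrics described in this abstract; the feature files, preprocessing, and network sizes are assumptions for illustration, not the study's actual pipeline.

```python
# Illustrative sketch: training an MLP on pre-extracted packet-level features
# (frame size, source-IP count, port-mapping constancy, proxy-IP count).
# The feature extraction step and the file names are assumed here.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: one row per traffic sample, columns = the four metrics; y: 1 = attack, 0 = benign.
X = np.load("flow_features.npy")   # hypothetical pre-extracted feature matrix
y = np.load("flow_labels.npy")     # hypothetical labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0))
model.fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))
```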
To avoid dire situations, the medical sector must develop methods for quickly and accurately identifying infections in remote regions. The primary goal of the proposed work is to create a wearable device that uses the Internet of Things (IoT) to carry out several monitoring tasks. The designed wearable device also operates within a multi-objective framework to decrease communication loss, shorten the waiting time before detection, and improve detection quality. Additionally, a design method for wearable IoT devices is established, using distinct mathematical approaches to address these objectives. The monitored parametric values are then saved on a separate IoT application platform. Since the proposed study focuses on a multi-objective framework, state design and deep learning (DL) optimization techniques are combined, reducing the complexity of detection in wearable technology. Existing methods have also incorporated wearable devices with IoT processes; however, their solutions cannot be reproduced with mathematical approaches and optimization strategies alone. The developed wearable devices can therefore be applied in real-time medical applications for fast remote monitoring of an individual. The proposed technique is tested in real time, and an IoT simulation tool is used to track the compared experimental results under five different scenarios. In all of the case studies examined, the proposed method outperforms current state-of-the-art methods.
The increasing demand for communication between networked devices connected either through an intranet or the internet increases the need for a reliable and accurate network defense mechanism. Network intrusion detection systems (NIDSs), which are used to detect malicious or anomalous network traffic, are an integral part of network defense. This research aims to address some of the issues faced by anomaly-based network intrusion detection systems. We first identify limitations of the legacy NIDS datasets, including the recent CICIDS2017 dataset, which led us to develop our novel dataset, CIPMAIDS2023-1. We then propose a stacking-based ensemble approach that outperforms the overall state of the art for NIDS. Various attack scenarios were implemented along with benign user traffic on a network topology created using Graphical Network Simulator-3 (GNS-3). Key flow features are extracted with CICFlowMeter for each attack and evaluated to analyze their behavior. Several machine learning approaches are applied to the features extracted from the traffic data, and their performance is compared. The results show that the stacking-based ensemble approach is the most promising, achieving the highest weighted F1-score of 98.24%.
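The stacking-based ensemble idea from this abstract can be sketched as below; the choice of base learners and meta-learner is illustrative, since the abstract does not specify which models were stacked.

```python
# Minimal sketch of a stacking-based ensemble over flow features (e.g., those
# exported by CICFlowMeter). The base learners and meta-learner chosen here
# are illustrative assumptions, not the paper's reported configuration.
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("dt", DecisionTreeClassifier(random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # meta-learner over base predictions
    cv=5,
)
# After stack.fit(X_train, y_train), the weighted F1-score can be computed with
# sklearn.metrics.f1_score(y_test, stack.predict(X_test), average="weighted").
```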
Due to the enormous data sizes involved in mobile computing and multimedia data transfer, additional data traffic may be generated, necessitating the use of data compression. This paper therefore investigates how mobile computing data are compressed under all transmission scenarios. The suggested approach integrates deep neural networks (DNNs) with high weighting functionalities for the compression modes. The proposed method employs appropriate data loading and precise compression ratios for successful data compression. The accuracy of the multimedia data conveyed to various users remains high even at higher compression ratios. The same data are transferred at significantly higher compression ratios, which saves time while also minimizing data errors that may occur at the receiver. The DNN process also includes a visible parameter for handling high data-weight situations; this parameter optimizes the data results, allowing simulation tools to readily observe the compressed data. A comparative case study over five different scenarios confirms the results and shows that the suggested strategy is significantly more effective than existing methods in roughly 63 percent of the cases.
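As one plausible reading of DNN-based compression, the following is a minimal autoencoder sketch in PyTorch; the framework, architecture, layer sizes, and implied compression ratio are all assumptions, since the abstract does not specify the network design.

```python
# Minimal autoencoder sketch for DNN-based compression (illustrative assumption;
# the paper's actual network and compression scheme are not given in the abstract).
import torch
import torch.nn as nn

class CompressionAutoencoder(nn.Module):
    def __init__(self, input_dim=1024, code_dim=128):  # roughly 8:1 compression of the feature vector
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 512), nn.ReLU(), nn.Linear(512, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 512), nn.ReLU(), nn.Linear(512, input_dim))

    def forward(self, x):
        code = self.encoder(x)      # compressed representation to be transmitted
        return self.decoder(code)   # reconstruction at the receiver

model = CompressionAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # reconstruction error stands in for "data errors at the receiver"
```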
The exponential growth of edge-based Internet-of-Things (IoT) services and their ecosystems has recently led to a new type of communication network, the Low Power Wide Area Network (LPWAN). This standard enables low-power, long-range, and low-data-rate communications. Long Range Wide Area Network (LoRaWAN) is a recent LPWAN standard that incorporates LoRa wireless into a networked infrastructure. Consequently, the energy consumption of smart End Devices (EDs) is a major challenge in highly dense network environments characterised by limited battery life, spectrum coverage, and data collisions. Intelligent and efficient service provisioning is urgently needed to streamline such networks and solve these problems. This paper proposes a Dynamic Reinforcement Learning Resource Allocation (DRLRA) approach to allocate resources such as channel, Spreading Factor (SF), and Transmit Power (Tp) to EDs, ultimately improving performance in terms of energy consumption and reliability. The proposed model is extensively simulated and evaluated against currently implemented algorithms such as Adaptive Data Rate (ADR) and Adaptive Priority-aware Resource Allocation (APRA) using standard and advanced evaluation metrics. The proposed work is properly cross-validated to show completely unbiased results.
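The reinforcement-learning allocation of channel, SF, and Tp could be sketched, for illustration only, as a tabular Q-learning loop; the state encoding, reward, and hyperparameters below are assumptions, since the abstract does not describe DRLRA's actual design.

```python
# Generic tabular Q-learning sketch for assigning (channel, SF, Tp) to an end
# device. The state/action/reward design is an illustrative assumption, not DRLRA itself.
import itertools
import random
from collections import defaultdict

CHANNELS = [0, 1, 2]
SPREADING_FACTORS = [7, 8, 9, 10, 11, 12]
TX_POWERS_DBM = [2, 8, 14]
ACTIONS = list(itertools.product(CHANNELS, SPREADING_FACTORS, TX_POWERS_DBM))

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
q_table = defaultdict(lambda: [0.0] * len(ACTIONS))

def choose_action(state):
    # Epsilon-greedy exploration over the joint (channel, SF, Tp) action space.
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: q_table[state][a])

def update(state, action, reward, next_state):
    # In practice, the reward would trade off delivery success against energy cost.
    best_next = max(q_table[next_state])
    q_table[state][action] += ALPHA * (reward + GAMMA * best_next - q_table[state][action])
```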