The Internet of Things (IoT) has spread globally across enterprises and the private sector. The growing number and variety of devices connected to the IoT, together with their diverse protocols, have contributed to an increasing number of attacks, such as denial-of-service (DoS) and remote-to-local (R2L) attacks. Several approaches can be used to construct attack detection models, including machine learning, data mining, and statistical analysis. Machine learning is now the most commonly used of these because it can provide precise analysis and results. We therefore surveyed the literature on IoT attack detection with machine learning in order to understand the process of creating detection models. We also examined the datasets used for the models, the types of IoT attacks, the independent variables used in the models, the evaluation metrics used to assess the models, and the monitoring infrastructure built with DevSecOps pipelines. We found 49 primary studies, in which detection models were developed using seven different types of machine learning techniques. Most primary studies used IoT device testbed datasets; others used public datasets such as NSL-KDD and UNSW-NB15. For measuring model performance, both numerical and graphical measures are commonly used. According to the literature, most IoT attacks occur at the network layer. Detection models whose development processes for IoT devices applied DevSecOps pipelines were more secure. From these results, we conclude that machine learning techniques can detect IoT attacks, but a few issues remain in the design of detection models. We recommend the continued use of hybrid frameworks for improved detection of IoT attacks, advanced monitoring infrastructure configured using software-pipeline-based methods, and the use of machine learning techniques for advanced supervision and monitoring.
In the past few years, several conceptual approaches have been proposed for specifying the main multidimensional (MD) properties of the data warehouse (DW) repository. However, most of them deal with isolated aspects of the DW and do not provide designers with an integrated, standard method for designing the whole DW life cycle (ETL processes, data sources, the DW repository, and so on). Some approaches depend on a specific platform or neglect important issues in the DW design life cycle. Extraction-transformation-loading (ETL) processes play an important role in DW architecture because they are responsible for integrating data from heterogeneous data sources into the DW repository. It is vitally important to consider security requirements from the earliest stages of the development process. This paper proposes a conceptual model for refreshing the data warehouse by inserting, updating, and deleting data using ETL processes while considering DW security requirements. First, the proposed ETL model is based on the Unified Modeling Language (UML), which allows us to accomplish the conceptual modeling of ETL processes. Second, we focus on how to integrate the proposed ETL model with the DW model. The proposed DW model is based on the Model Driven Architecture (MDA), a standard framework for software development that addresses the complete life cycle of designing, deploying, integrating, and managing applications by using models. This paper thus proposes an integrated conceptual model for addressing temporal data warehouse security requirements (CMTDWS). One key advantage of our model is that the conceptual modeling of secure temporal DWs is accomplished independently of the target platform where the DW is to be implemented, allowing the corresponding TDWs to be implemented on any secure commercial database management system.
Due to the huge number of connected Internet of Things (IoT) devices within a network, denial-of-service and flooding attacks on networks are on the rise. These attacks disrupt IoT devices and deny them service. In this study, we propose a novel hybrid meta-heuristic adaptive particle swarm optimization–whale optimizer algorithm (APSO–WOA) for optimizing the hyperparameters of a convolutional neural network (APSO–WOA–CNN). The APSO–WOA optimization algorithm's fitness value is defined as the cross-entropy loss on the validation set during CNN model training. We compare our optimization algorithm with other optimization algorithms, such as the APSO algorithm, for optimizing the hyperparameters of the CNN. In model training, the APSO–WOA–CNN algorithm achieved the best performance compared to the FNN algorithm, which used manual parameter settings. We evaluated the APSO–WOA–CNN algorithm against APSO–CNN, SVM, and FNN. The simulation results suggest that APSO–WOA–CNN is effective and can reliably detect multi-type IoT network attacks. The results show that, compared to the APSO–CNN algorithm, the APSO–WOA–CNN algorithm improves accuracy by 1.25%, average precision by 1%, the kappa coefficient by 11%, Hamming loss by 1.2%, and the Jaccard similarity coefficient by 2%, while the APSO–CNN algorithm achieves the best performance compared to the other algorithms.
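The core idea of the abstract above — a swarm-based optimizer whose fitness is the validation loss of the trained model — can be sketched with a plain particle swarm loop. This is a minimal illustration, not the authors' APSO–WOA hybrid: the fitness function below is a simple quadratic stand-in for the validation cross-entropy loss, and the two hyperparameter dimensions (a log-learning-rate and a filter count) and all bounds and coefficients are invented for illustration.

```python
import random

def fitness(pos):
    # Stand-in for "train CNN with these hyperparameters, return
    # validation cross-entropy loss". Hypothetical optimum at
    # log-learning-rate -3 and 64 filters.
    lr_exp, filters = pos
    return (lr_exp + 3.0) ** 2 + (filters - 64.0) ** 2 / 100.0

def pso(n_particles=10, n_iters=50,
        bounds=((-6.0, -1.0), (16.0, 128.0)),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    dim = len(bounds)
    # Initialize particle positions randomly within bounds.
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_f = [fitness(p) for p in pos]         # personal best fitness values
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]    # global best
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Standard PSO velocity update: inertia + cognitive + social.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

In the paper's setting, each fitness evaluation would train the CNN on the training split and return its validation cross-entropy; the WOA component would additionally replace or blend the velocity update for part of the swarm.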
The extensive amounts of knowledge and data stored in medical databases require the development of specialized tools for storing and accessing data, analyzing data, and making effective use of the stored knowledge. The goal is to show how methods and tools for intelligent data analysis help narrow the increasing gap between data gathering and data comprehension. This goal is achieved by applying the association rules technique to analyze and retrieve hidden patterns from a large volume of data collected in the medical database of a large hospital. The approach used led to results normally unattainable using conventional techniques. Specifically, an episode database of nephrology examinations, signs, symptoms, and diagnoses is used. (Second International Conference on the Digital Society.)
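The association rules technique mentioned above extracts patterns of the form "records containing X also tend to contain Y", scored by support and confidence. The toy sketch below shows the idea on invented examination-record itemsets; the symptom names and thresholds are illustrative assumptions, not data from the hospital database.

```python
from itertools import combinations

# Hypothetical examination records, each a set of observed findings.
records = [
    {"edema", "hypertension", "proteinuria"},
    {"edema", "proteinuria"},
    {"hypertension", "proteinuria"},
    {"edema", "hypertension", "proteinuria"},
    {"hypertension"},
]

def support(itemset):
    # Fraction of records containing every item in the itemset.
    return sum(1 for r in records if itemset <= r) / len(records)

def rules(min_support=0.4, min_confidence=0.7):
    # Enumerate single-item rules lhs -> rhs that meet both thresholds.
    items = sorted({i for r in records for i in r})
    out = []
    for a, b in combinations(items, 2):
        for lhs, rhs in (({a}, {b}), ({b}, {a})):
            s = support(lhs | rhs)
            conf = s / support(lhs)
            if s >= min_support and conf >= min_confidence:
                out.append((min(lhs), min(rhs), s, conf))
    return out
```

For example, "edema" co-occurs with "proteinuria" in 3 of 5 records (support 0.6) and in every record containing "edema" (confidence 1.0), so that rule is emitted. A full Apriori implementation would extend this to multi-item antecedents by pruning infrequent itemsets level by level.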
Securing e-commerce sites has become a necessity, as they process critical and sensitive data belonging to customers and organizations. When a customer navigates through an e-commerce site, his or her clicks are recorded in a web log file. Analyzing these log files using data mining reveals many interesting patterns. These results are used in many different applications and, recently, in detecting attacks on the web. In order to improve the quality of the data, and consequently of the mining results, the data in the log files must first be preprocessed. In this paper, we discuss how web log files with different formats can be combined into one unified format using XML in order to track and extract more attacks. Because log files usually contain noisy and ambiguous data, this paper also shows how the data is preprocessed before the mining process is applied in order to detect attacks. We also discuss the difference between log preprocessing for web intrusion detection and for web usage mining.
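The unification step described above — parsing heterogeneous log formats into one XML representation — can be sketched as follows. This is a minimal illustration under stated assumptions: the two input formats (Apache Common Log Format and a tab-separated IIS-like format) and the unified field names (`ip`, `method`, `url`, `status`) are chosen here for the example, not taken from the paper.

```python
import xml.etree.ElementTree as ET

def parse_clf(line):
    # Common Log Format, e.g.:
    # 127.0.0.1 - - [10/Oct/2023:13:55:36 +0000] "GET /cart HTTP/1.1" 200 2326
    parts = line.split()
    return {"ip": parts[0], "method": parts[5].strip('"'),
            "url": parts[6], "status": parts[8]}

def parse_iis(line):
    # Tab-separated IIS-like format, e.g.:
    # 2023-10-10 13:55:36<TAB>10.0.0.2<TAB>POST<TAB>/login<TAB>302
    _, ip, method, url, status = line.split("\t")
    return {"ip": ip, "method": method, "url": url, "status": status}

def to_unified_xml(clf_lines, iis_lines):
    # Merge records from both formats into one XML document with a
    # common schema, ready for cleaning and mining.
    root = ET.Element("log")
    for rec in [parse_clf(l) for l in clf_lines] + [parse_iis(l) for l in iis_lines]:
        entry = ET.SubElement(root, "entry")
        for key, value in rec.items():
            ET.SubElement(entry, key).text = value
    return ET.tostring(root, encoding="unicode")
```

Once all entries share one schema, the preprocessing steps the paper describes (removing noisy or ambiguous entries, session identification) can operate on a single representation regardless of the original server format.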