Cloud and fog computing have emerged as promising paradigms for the Internet of Things (IoT) and cyber-physical systems (CPS). A defining characteristic of CPS is the reciprocal feedback loop between physical processes and cyber elements (computation, software, and networking), which makes data stream analytics one of the core components of CPS: (i) it extracts insights and knowledge from the data streams generated by sensors and other monitoring components embedded in physical systems; (ii) it supports informed decision making; (iii) it enables feedback from physical processes to their cyber counterparts; and (iv) it ultimately facilitates the integration of cyber and physical systems. Data stream analytics powered by machine learning techniques has been applied successfully to many CPS, so a survey of the particularities of applying machine learning techniques in the CPS domain is needed. In particular, we explore how machine learning methods should be deployed and integrated in cloud and fog architectures to better satisfy the requirements arising in CPS domains, e.g. mission criticality and time criticality. To the best of our knowledge, this paper is the first to systematically study machine learning techniques for CPS data stream analytics from multiple perspectives, and in particular to provide discussion and guidance on how CPS machine learning methods should be deployed in a cloud and fog architecture.
This document is the author's post-print version, incorporating any revisions agreed during the peer-review process. Some differences between the published version and this version may remain and you are advised to consult the published version if you wish to cite from it.
The extension of the Cloud to the Edge of the network through Fog Computing can have a significant impact on the reliability and latencies of deployed applications. Recent papers have suggested a shift from VM- and container-based deployments to an environment shared among applications to better utilize resources. Unfortunately, existing deployment and optimization methods pay little attention to developing and identifying complete models of such systems, which can cause large discrepancies between simulated and physical runtime parameters. Existing models do not account for application interdependence or the locality of application resources, which causes extra communication and processing delays. This paper addresses these issues by carrying out experiments in both cloud and edge systems at various scales and with various applications. It analyses the outcomes to derive a new reference model with data-driven parameter formulations and representations that help explain the effect of migration on these systems, yielding a more complete characterization of the fog environment. Combined with optimization methods, the model can guide application deployment and migration, improving overall system reliability and reducing delays and constraint violations. An Industry 4.0 based case study with different scenarios was used to analyse and validate the effectiveness of the proposed model. Tests were deployed on physical and virtual environments at different scales, and the advantages of the model-based optimization methods were validated in real physical environments. Based on these tests, we found that our model is 92% accurate in load and delay predictions for application deployments in both cloud and edge.
Studies have demonstrated that changes in the climate affect wind power forecasting under different weather conditions. In theory, accurately predicting both wind power output and weather changes with statistics-based prediction models is difficult. In practice, traditional machine learning models can perform long-term wind power forecasting with a mean absolute percentage error (MAPE) of 10% to 17%, which does not meet the engineering requirements of our renewable energy project. Deep learning networks (DLNs) have been employed to capture the correlations between meteorological features and power generation using multilayer convolutional architectures trained with gradient descent algorithms to minimize estimation errors, and they are widely applicable to wind power forecasting. This study therefore aimed at long-term (24–72 h ahead) prediction of wind power with an MAPE of less than 10% using the Temporal Convolutional Network (TCN), a DLN architecture. In our experiment, we pretrained the TCN model using historical weather data and the power generation outputs of a wind turbine from a SCADA-monitored wind power plant in Turkey. The experimental results indicated an MAPE of 5.13% for 72-h wind power prediction, which is adequate within the constraints of our project. Finally, we compared the performance of four DLN-based prediction models for power forecasting: the TCN, long short-term memory (LSTM), recurrent neural network (RNN), and gated recurrent unit (GRU) models. We validated that the TCN outperforms the other three models for wind power prediction in terms of required data input volume, stability of error reduction, and forecast accuracy.
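For reference, the MAPE metric quoted above can be computed as follows. This is a minimal sketch with hypothetical forecast values, not data from the study:

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100.0)

# Hypothetical hourly power outputs (kW): measured vs. forecast
y_true = [120.0, 135.0, 150.0, 160.0]
y_pred = [114.0, 140.0, 146.0, 155.0]
print(round(mape(y_true, y_pred), 2))  # 3.62
```

An MAPE of 5.13%, as reported for the 72-h forecast, means the predictions deviate from the measured output by about 5% on average relative to the measured values.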
With rapid advancements in in-vehicle network (IVN) technology, the demand for multiple advanced functions and networking in electric vehicles (EVs) has recently increased. To enable various intelligent functions, the electrical system of existing vehicles incorporates a controller area network (CAN) bus that enables communication among electronic control units (ECUs). In practice, traditional network-based intrusion detection systems (NIDSs) cannot easily identify threats to the CAN bus. It is therefore necessary to develop a new type of NIDS, an on-the-move intrusion detection system (OMIDS), to categorise these threats. Accordingly, this paper proposes an intrusion detection model for IVNs based on the VGG16 deep learning classifier, which learns the characteristics of attack behaviour and classifies threats. The experimental dataset was provided by the Hacking and Countermeasure Research Lab (HCRL) to validate classification performance for denial of service (DoS), fuzzy, gear spoofing, and RPM spoofing attacks in vehicle communications. The proposed classifier's performance was compared with that of the XGBoost ensemble learning scheme for identifying threats from in-vehicle networks. In particular, detection performance was evaluated in terms of accuracy, precision, recall, and F1-score, both to ensure detection accuracy and to identify false alarms. The experimental results show that, for the 5-subcategory classification of the HCRL Car-Hacking dataset, the VGG16 and XGBoost classifiers (n = 50) reached test accuracies of 97.8241% and 99.9995%, respectively.
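A common way to feed CAN bus traffic to an image classifier such as VGG16 is to pack a sliding window of frames into a fixed-size grayscale array. The sketch below illustrates that general idea only; the function name, frame layout, and example values are assumptions for illustration, not the paper's actual preprocessing pipeline:

```python
import numpy as np

def frames_to_image(frames, height=64):
    """Pack a window of CAN frames into a fixed-size grayscale 'image'.

    Each frame is (arbitration_id, data_bytes); the 11-bit ID is encoded
    as two bytes followed by up to 8 payload bytes, giving one 10-byte
    row per frame. Values are scaled to [0, 1] so the resulting array
    can be fed to a CNN classifier such as VGG16.
    """
    rows = []
    for can_id, data in frames[:height]:
        row = [(can_id >> 8) & 0xFF, can_id & 0xFF] + list(data[:8])
        row += [0] * (10 - len(row))       # pad short payloads
        rows.append(row)
    while len(rows) < height:              # pad short windows
        rows.append([0] * 10)
    return np.array(rows, dtype=np.float32) / 255.0

# Hypothetical window: one normal frame and one flooding (DoS-like) frame
window = [(0x316, [0x05, 0x21, 0x68, 0x09, 0x21, 0x21, 0x00, 0x6F]),
          (0x000, [0x00] * 8)]
img = frames_to_image(window)
print(img.shape)  # (64, 10)
```

Encoding traffic this way lets a vision model pick up spatial patterns such as the bursts of identical high-priority IDs that characterise DoS flooding.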