Digital Twin technology is an emerging concept that has become the centre of attention for industry and, more recently, academia. Advances in Industry 4.0 have facilitated its growth, particularly in the manufacturing industry. The Digital Twin has been defined in many ways, but is best described as the seamless, bidirectional integration of data between a physical asset and its virtual counterpart. The challenges, applications, and enabling technologies for Artificial Intelligence, the Internet of Things (IoT) and Digital Twins are presented. A review of publications relating to Digital Twins is performed, producing a categorical review of recent papers. The review categorises them by research area: manufacturing, healthcare and smart cities, discussing a range of papers that reflect these areas and the current state of research. The paper provides an assessment of the enabling technologies, challenges and open research questions for Digital Twins.
As the take-up of Long Term Evolution (LTE)/4G cellular accelerates, there is increasing interest in technologies that will define the next-generation (5G) telecommunication standard. This paper identifies several emerging technologies that will change and define future generations of telecommunication standards. Some of these technologies are already making their way into standards such as 3GPP LTE, while others are still in development. Additionally, we look at some of the research problems that these new technologies pose.
Optimization of energy consumption in future intelligent energy networks (or Smart Grids) will be based on grid-integrated, near-real-time communications between various grid elements in generation, transmission, distribution and loads. This paper discusses some of the challenges and opportunities of communications research in the areas of smart grid and smart metering. In particular, we focus on some of the key communications challenges for realizing interoperable and future-proof smart grid/metering networks, smart grid security and privacy, and how some of the existing networking technologies can be applied to energy management. Finally, we discuss the coordinated standardization efforts in Europe to harmonize communications standards and protocols.
Anomaly detection is a problem with applications across a wide variety of domains; it involves the identification of novel or unexpected observations or sequences within the data being captured. The majority of current anomaly detection methods are highly specific to the individual use case, requiring expert knowledge of both the method and the situation to which it is applied. The IoT, as a rapidly expanding field, offers many opportunities for this type of data analysis, but the nature of IoT data can make it difficult to apply. This review provides a background on the challenges that may be encountered when applying anomaly detection techniques to IoT data, with examples of applications for IoT anomaly detection taken from the literature. We discuss a range of approaches developed across a variety of domains, not limited to the Internet of Things owing to the relative novelty of this application. Finally, we summarise the current challenges faced in the anomaly detection domain with a view to identifying potential research opportunities for the future.
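As a concrete illustration of the point-anomaly detection described above, the following is a minimal sketch (a generic z-score baseline, not a method from any specific reviewed paper; the sensor data and threshold are invented for illustration) that flags readings deviating strongly from the series mean:

```python
import statistics

def zscore_anomalies(readings, threshold=2.0):
    """Return indices of readings whose z-score exceeds the threshold.

    A deliberately simple batch baseline; real IoT pipelines must also
    handle concept drift, seasonality and streaming constraints, which
    are among the challenges the review discusses.
    """
    mu = statistics.mean(readings)
    sigma = statistics.pstdev(readings)
    return [i for i, x in enumerate(readings)
            if sigma > 0 and abs(x - mu) / sigma > threshold]

# A hypothetical temperature series with one faulty sensor spike.
temps = [21.0, 21.3, 20.8, 21.1, 45.0, 21.2, 20.9]
anomalies = zscore_anomalies(temps)  # flags the spike at index 4
```

Even this trivial detector is use-case specific: the threshold and the assumption of a roughly stationary mean encode domain knowledge, illustrating why general-purpose IoT anomaly detection is hard.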
Federated learning (FL) is a well-established method for performing machine learning tasks over massively distributed data. However, in settings where data is distributed in a non-iid (not independent and identically distributed) fashion, as is typical in real-world situations, the joint model produced by FL suffers in terms of test-set accuracy and/or communication costs compared to training on iid data. We show that learning a single joint model is often not optimal in the presence of certain types of non-iid data. In this work we present a modification to FL that introduces a hierarchical clustering step (FL+HC) to separate clusters of clients by the similarity of their local updates to the global joint model. Once separated, the clusters are trained independently and in parallel on specialised models. We present a robust empirical analysis of the hyperparameters for FL+HC across several iid and non-iid settings. We show how FL+HC allows model training to converge in fewer communication rounds (significantly so under some non-iid settings) compared to FL without clustering. Additionally, FL+HC allows a greater percentage of clients to reach a target accuracy compared to standard FL. Finally, we make suggestions for good default hyperparameters to promote superior-performing specialised models without modifying the underlying federated learning communication protocol.
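The clustering step described above can be sketched as follows. This is an illustrative reconstruction only: the function name, linkage method, metric and threshold are assumptions for the sketch, whereas the paper treats them as tunable hyperparameters of FL+HC.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_clients(client_updates, distance_threshold):
    """Group clients by similarity of their local model updates.

    client_updates: (n_clients, n_params) array of flattened weight
    deltas relative to the global joint model. Returns one cluster
    label per client; each cluster then trains its own specialised model.
    """
    # Agglomerative (hierarchical) clustering over the update vectors.
    Z = linkage(client_updates, method="ward", metric="euclidean")
    return fcluster(Z, t=distance_threshold, criterion="distance")

# Two synthetic client groups whose updates point in opposite directions,
# mimicking a simple non-iid split.
rng = np.random.default_rng(0)
updates = np.vstack([
    rng.normal(+1.0, 0.1, size=(5, 10)),  # clients with similar data
    rng.normal(-1.0, 0.1, size=(5, 10)),  # clients with a different distribution
])
labels = cluster_clients(updates, distance_threshold=5.0)
```

Because clusters train independently after this step, the base FL communication protocol within each cluster is unchanged, which is what allows FL+HC to slot into existing federated systems.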
This paper discusses malicious false data injection attacks on the wide area measurement and monitoring system in smart grids. Firstly, methods of constructing sparse stealth attacks are developed for two typical scenarios: random attacks, in which arbitrary measurements can be compromised, and targeted attacks, in which specified state variables are modified. It has already been demonstrated that stealth attacks always exist if the number of compromised measurements exceeds a certain value. In this paper it is found that random undetectable attacks can be accomplished by modifying only a much smaller number of measurements than this value. It is well known that the system can be protected from malicious attacks by making a certain subset of measurements immune to attack. Secondly, an efficient greedy search algorithm is proposed to quickly find this subset of measurements to be protected in order to defend against stealth attacks. It is shown that this greedy algorithm achieves almost the same performance as the brute-force method but without the combinatorial complexity. Thirdly, a robust attack detection method is discussed. The detection method is designed based on the robust principal component analysis problem with element-wise constraints, and is shown to identify the real measurements as well as the attacks even when only partial observations are collected. The simulations are conducted on IEEE test systems.
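The reason stealth attacks evade bad-data detection can be shown in a few lines. The sketch below uses a toy DC state-estimation model (the matrix, state and attack values are invented for illustration): any attack vector of the form a = Hc lies in the column space of the measurement Jacobian H, so the least-squares estimator absorbs it into the state estimate and the detection residual is unchanged.

```python
import numpy as np

# Toy DC state estimation: measurements z = H x + noise, with an
# illustrative 4-measurement, 2-state Jacobian H.
H = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0,  1.0],
              [1.0, -1.0]])
rng = np.random.default_rng(1)
x_true = np.array([0.5, -0.2])
z = H @ x_true + rng.normal(0.0, 0.01, size=4)

def residual_norm(z, H):
    """Norm of the bad-data-detection residual r = z - H x_hat."""
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    return np.linalg.norm(z - H @ x_hat)

# Stealth attack: a = H c shifts the estimated state by c while leaving
# the residual (and hence the chi-squared detector) untouched.
c = np.array([0.3, -0.1])   # arbitrary injected state error
z_attacked = z + H @ c
```

This invisibility to residual-based tests is what motivates both the protected-measurement defence (removing degrees of freedom from the attacker) and the robust-PCA detector discussed in the paper.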
Accurate estimation of battery degradation cost is one of the main barriers to batteries participating in the energy arbitrage market. This paper addresses this problem by using a model-free deep reinforcement learning (DRL) method to optimize battery energy arbitrage while accounting for an accurate battery degradation model. Firstly, the control problem is formulated as a Markov Decision Process (MDP). Then a noisy-network-based deep reinforcement learning approach is proposed to learn an optimized control policy for the storage charging/discharging strategy. To address the uncertainty of electricity prices, a hybrid Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) model is adopted to predict day-ahead prices. Finally, the proposed approach is tested on historical UK wholesale electricity market prices. Comparison with model-based Mixed Integer Linear Programming (MILP) demonstrates the effectiveness and performance of the proposed framework.