Deep learning technologies, owing to their strength in extracting and recognizing patterns in high-dimensional data, have been widely adopted in multisensor-based fire detection systems. Because deep learning models can produce erroneous predictions when their training datasets are incomplete, they must be retrained on unseen observations. However, storing large amounts of data from continuous multisensor streams and labeling them to create a retraining dataset are costly and time-consuming. In this paper, we propose an active learning framework built around an informative experience memory that is populated with meaningful retraining data selected by assessing data uncertainty. In the proposed framework, the deep learning model predicts fire occurrence and estimates model uncertainty using a Bayesian neural network with Monte Carlo dropout. By storing only data points with high uncertainty in the fixed-size informative experience memory and querying them to the system managers for labeling, storage and labeling costs are minimized while performance improves. To evaluate our active learning framework with different neural network structures, we develop three Bayesian neural networks based on conventional classification architectures: the feedforward neural network (FNN), the fully convolutional network (FCN), and the long short-term memory (LSTM) network. We further investigate several uncertainty scoring methods for classification tasks: entropy, BALD, variation ratios, and mean STD. Experiments on a real dataset show that the Bayesian FCN with the BALD scoring method achieves the highest performance gain, an F1 score of 0.95, a 24% improvement, using only 700 data points.
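The four uncertainty scores named above (entropy, BALD, variation ratios, mean STD) can all be computed from the softmax outputs of T stochastic Monte Carlo dropout forward passes. The sketch below is an illustrative NumPy implementation of the standard formulas, not the paper's code; the function name and the (T, C) input shape are assumptions.

```python
import numpy as np

def uncertainty_scores(probs):
    """Acquisition scores from T Monte Carlo dropout forward passes.

    probs: array of shape (T, C), softmax outputs of T stochastic passes.
    Returns the four scores discussed in the abstract.
    """
    eps = 1e-12
    mean_p = probs.mean(axis=0)                    # predictive distribution
    pred_entropy = -np.sum(mean_p * np.log(mean_p + eps))
    exp_entropy = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    bald = pred_entropy - exp_entropy              # mutual information
    # variation ratio: 1 - fraction of passes agreeing with the modal class
    modes = probs.argmax(axis=1)
    top = np.bincount(modes, minlength=probs.shape[1]).max()
    var_ratio = 1.0 - top / probs.shape[0]
    mean_std = probs.std(axis=0).mean()            # mean per-class std
    return {"entropy": pred_entropy, "bald": bald,
            "variation_ratio": var_ratio, "mean_std": mean_std}
```

A sample on which the stochastic passes disagree scores high on all four measures and would be queued into the experience memory for labeling.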
As natural disasters grow in scale due to environmental problems such as global warming, disaster management systems find it difficult to rapidly provide disaster prediction services because the underlying natural phenomena are complex. Digital twins can provide these services effectively by combining high-fidelity disaster models and real-time observational data with distributed computing schemes. However, previous schemes take little account of the correlations among the environmental data of disasters, such as landscapes and weather. This leads to inaccurate computing-load predictions and, in turn, unbalanced load partitioning, which increases the prediction service times of disaster management agencies. In this paper, we propose a novel distributed computing framework that accelerates prediction services through semantic analysis of the correlations among environmental data. The framework combines the data into disaster semantic data that represent the initial disaster states, such as the sizes of wildfire burn scars and fuel models. Using the semantic data, the framework predicts computing loads with a convolutional neural network-based algorithm, partitions the simulation model into balanced sub-models, and allocates the sub-models to distributed computing nodes. As a result, the proposed framework reduces prediction service times by up to 38.5% compared with previous schemes.
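The abstract does not detail the partitioning step, but once per-sub-model loads are predicted, balancing them across computing nodes can be sketched with a standard longest-processing-time greedy heuristic. The function below is a hypothetical illustration of that balancing idea, not the paper's algorithm.

```python
def partition_loads(loads, n_nodes):
    """Greedy LPT partitioning: assign each sub-model, in descending
    order of predicted load, to the currently least-loaded node."""
    bins = [[] for _ in range(n_nodes)]
    totals = [0.0] * n_nodes
    for idx in sorted(range(len(loads)), key=lambda i: -loads[i]):
        j = totals.index(min(totals))   # least-loaded node so far
        bins[j].append(idx)
        totals[j] += loads[idx]
    return bins, totals
```

The quality of the balance depends directly on the accuracy of the predicted loads, which is why the framework's CNN-based load prediction matters.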
A cyber-physical system (CPS) is a distributed control system in which the cyber and physical parts are tightly interconnected. A representative CPS is an electric vehicle (EV), which combines complex physical systems with information and communication technology (ICT); because an EV is such a complex CPS, preliminary verification through simulation is essential for performance prediction and quantitative analysis. This paper proposes an FMI-based distributed CPS simulation framework (F-DCS) that adopts a redundancy reduction algorithm (RRA) for the validation of EV simulations. The RRA improves simulation speed and efficiency by predicting and skipping the repeated portions of a given driving cycle while still maintaining accuracy. To evaluate the performance of the proposed F-DCS, an EV model was simulated with the RRA applied. The results confirm that the F-DCS with the RRA reduces simulation time by over 30% while maintaining accuracy comparable to the conventional approach. Furthermore, the F-DCS with the RRA was shown to provide results that reflect real-time sensor information.
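The RRA's core idea, avoiding re-simulation of repeated portions of a driving cycle, can be sketched as a result cache keyed by segment content. This is an illustrative simplification; `simulate_segment` stands in for the expensive FMI co-simulation step, and the segment representation is an assumed interface, not the paper's.

```python
def simulate_cycle(segments, simulate_segment):
    """Reuse cached results for driving-cycle segments that repeat,
    so the expensive solver runs only once per unique segment."""
    cache = {}
    results, sim_calls = [], 0
    for seg in segments:
        key = tuple(seg)            # segment content as the cache key
        if key not in cache:
            cache[key] = simulate_segment(seg)
            sim_calls += 1          # expensive simulation actually ran
        results.append(cache[key])
    return results, sim_calls
```

On a cycle where a segment repeats three times, the solver runs once for it, which is where the reported 30%-plus time reduction would come from.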
Disaster management systems require accurate disaster monitoring and prediction services to reduce the damage caused by natural disasters. Digital twins of natural environments can provide these services using physics-based and data-driven disaster models. However, digital twins may generate erroneous disaster predictions because high-fidelity physics-based models of complex natural disaster behavior are impracticable to define and data-driven models depend heavily on their training datasets. Erroneous predictions can cause disaster management systems to misallocate disaster response resources, including medical personnel, rescue equipment, and relief supplies, which may increase the damage from natural disasters. This study proposes a digital twin architecture that provides accurate disaster prediction services through a similarity-based hybrid modeling scheme. The scheme creates a hybrid disaster model in which a data-driven error correction model compensates for the errors of the physics-based prediction results, enhancing prediction accuracy. To reduce the errors arising from the hybrid model's data dependency, the scheme constructs the training dataset using similarity assessments between the target disaster and historical disasters. Evaluations in wildfire scenarios show that the digital twin decreases prediction errors by approximately 50% compared with those of existing schemes.
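The two ingredients of the scheme, similarity-based selection of historical disasters for the training set and physics-plus-residual hybrid prediction, can be sketched as follows. The feature vectors, the Euclidean similarity metric, and the function names are illustrative assumptions, not the paper's definitions.

```python
import math

def select_similar(histories, target_features, k=2):
    """Pick the k historical disasters most similar to the target,
    using Euclidean distance over feature vectors as a stand-in metric."""
    return sorted(histories,
                  key=lambda h: math.dist(h["features"], target_features))[:k]

def hybrid_predict(physics_pred, correction_model, features):
    """Hybrid prediction: physics-based result plus a data-driven
    residual correction learned from similar historical disasters."""
    return physics_pred + correction_model(features)
```

Training the correction model only on disasters similar to the target is what limits the data-dependency errors the abstract describes.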
Named data networking (NDN) is a future network architecture that replaces IP-oriented communication with content-oriented communication and introduces new features such as in-network caching, multiple paths, and multiple sources. Services to which NDN may be applied in the future, such as video streaming, can cause congestion when data traffic concentrates on a single node during periods of high demand. To solve this problem, sending-rate control methods such as TCP congestion control have been proposed, but they do not adequately reflect the characteristics of NDN. Therefore, we propose a congestion control method that exploits NDN's multipath feature using reinforcement learning and deep learning. The intelligent forwarding strategy proposed in this paper, which uses Q-learning and long short-term memory (LSTM) in NDN, consists of two phases. In the first phase, an LSTM model is trained to predict the pending interest table (PIT) entry rate, which serves as a congestion indicator because it reflects the amount of data returned. In the second phase, Interests are forwarded along an uncongested alternative path selected via Q-learning based on the PIT entry rate predicted by the trained LSTM model. Simulation results show that the proposed method increases the data reception rate by 6.5% and 19.5% and decreases the packet drop rate by 7.3% and 17.2% compared with the adaptive SRTT-based forwarding strategy (ASF) and BestRoute, respectively.
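The second phase can be sketched as tabular Q-learning over outgoing faces. The abstract only hints at how the LSTM-predicted PIT entry rate feeds the reward, so the sketch below assumes a generic reward signal (e.g. low predicted congestion yields high reward); all names and parameters are illustrative, not the paper's.

```python
import random

def choose_face(q, state, faces, epsilon=0.1):
    """Epsilon-greedy selection of the outgoing face for a prefix."""
    if random.random() < epsilon:
        return random.choice(faces)          # explore an alternative path
    return max(faces, key=lambda f: q.get((state, f), 0.0))

def update_q(q, state, face, reward, next_state, faces,
             alpha=0.5, gamma=0.9):
    """Standard Q-learning update; the reward could be derived from the
    LSTM-predicted PIT entry rate of the chosen face."""
    best_next = max(q.get((next_state, f), 0.0) for f in faces)
    old = q.get((state, face), 0.0)
    q[(state, face)] = old + alpha * (reward + gamma * best_next - old)
```

Faces whose predicted PIT entry rate signals congestion accumulate lower Q-values, so Interests drift toward uncongested alternative paths.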