“…Moreover, OCSVM and LOF both performed well, but OCSVM was statistically superior to LOF. Abdu-Aguye et al. [12] also achieved high accuracy at detecting FGSM adversarial examples with an OCSVM-based scheme. Unlike this work, their scheme focused on defending classification models and did not vary attack patterns or magnitudes.…”
Section: Discussion (mentioning; confidence: 99%)
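As an illustration of this kind of one-class detection stage, the sketch below fits scikit-learn's OneClassSVM and LocalOutlierFactor on features extracted from clean input windows and flags deviating test windows as adversarial. The extract_features helper, the window shape, and the hyper-parameters (nu, n_neighbors) are placeholder assumptions, not the values used in the cited works.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.neighbors import LocalOutlierFactor
from sklearn.preprocessing import StandardScaler

def extract_features(windows):
    """Illustrative per-window statistics; the paper's actual feature set may differ."""
    return np.column_stack([
        windows.mean(axis=1),
        windows.std(axis=1),
        np.abs(np.diff(windows, axis=1)).mean(axis=1),  # mean absolute first difference
    ])

# Clean (attack-free) input windows used to fit the detectors: (n_samples, window_len)
clean_windows = np.random.rand(500, 24)   # placeholder data
scaler = StandardScaler().fit(extract_features(clean_windows))
X_train = scaler.transform(extract_features(clean_windows))

# One-class SVM and LOF (novelty mode) are trained only on clean features;
# predict() returns +1 for inliers and -1 for outliers (flagged as adversarial).
ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_train)
lof = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(X_train)

test_windows = np.random.rand(100, 24)    # placeholder test data
X_test = scaler.transform(extract_features(test_windows))
flagged_ocsvm = ocsvm.predict(X_test) == -1
flagged_lof = lof.predict(X_test) == -1
```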
“…They used the retraining defense strategy to improve the models' robustness. Abdu-Aguye et al. [12] proposed using OCSVM to classify samples as original or perturbed. Their work was based on the attacks and datasets presented in [7].…”
Section: Defense Approaches (mentioning; confidence: 99%)
“…The hyper-parameters used for each of these models are shown in Table 2. [Table 2 residue: per-block hyper-parameter value lists, e.g. [1, 2, 4, 8, 16, 32], for Blocks 1 and 2.]…”
Section: Generation Forecasting Module (mentioning; confidence: 99%)
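The bracketed value lists in the snippet above appear to be per-block hyper-parameter settings from Table 2, possibly dilation factors of the temporal convolutional blocks. Assuming that reading, the sketch below shows how such a list could parameterize a dilated causal convolution stack in PyTorch; it is illustrative only and does not reproduce the paper's actual architecture.

```python
import torch
import torch.nn as nn

class DilatedTCNBlock(nn.Module):
    """Illustrative TCN block: a stack of dilated causal 1-D convolutions.

    `dilations` would come from a hyper-parameter list such as [1, 2, 4, 8, 16, 32];
    this is an assumption about how the Table 2 values are used, not the paper's exact model.
    """
    def __init__(self, channels, kernel_size, dilations):
        super().__init__()
        layers = []
        for d in dilations:
            layers += [
                # Left-pad so the convolution stays causal (no future leakage)
                nn.ConstantPad1d(((kernel_size - 1) * d, 0), 0.0),
                nn.Conv1d(channels, channels, kernel_size, dilation=d),
                nn.ReLU(),
            ]
        self.net = nn.Sequential(*layers)

    def forward(self, x):            # x: (batch, channels, time)
        return self.net(x)

block = DilatedTCNBlock(channels=16, kernel_size=3, dilations=[1, 2, 4, 8, 16, 32])
out = block(torch.randn(8, 16, 48))  # e.g. 48 time steps of 16 input features
```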
“…Different schemes [10–12] have recently been proposed to defend ML algorithms against adversarial examples. Studies [10, 11] made use of adversarial training.…”
Section: Introduction (mentioning; confidence: 99%)
“…In this technique, the data used for model training include adversarial samples crafted specifically to make the model more resilient against this kind of attack. Conversely, Abdu-Aguye et al. [12] proposed an approach that detects adversarial samples during the test phase. Despite their encouraging results, these studies only focused on protecting ML models designed for classification tasks.…”
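A minimal sketch of the adversarial-training idea described above, assuming a PyTorch regression model and FGSM-style perturbations; the function names, epsilon value, and toy model are placeholders rather than the cited works' actual procedures.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, loss_fn, epsilon):
    """Craft an FGSM perturbation of the inputs (epsilon is the attack magnitude)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, loss_fn, x, y, epsilon=0.05):
    """One training step on a mix of clean and adversarially perturbed samples."""
    x_adv = fgsm_perturb(model, x, y, loss_fn, epsilon)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)  # clean + adversarial loss
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with a toy forecaster (placeholder architecture and data)
model = nn.Sequential(nn.Linear(24, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x_batch, y_batch = torch.randn(64, 24), torch.randn(64, 1)
adversarial_training_step(model, optimizer, nn.MSELoss(), x_batch, y_batch)
```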
With data collected by Internet of Things sensors, deep learning (DL) models can forecast the generation capacity of photovoltaic (PV) power plants. This functionality is especially relevant for PV power operators and users, as PV plants exhibit irregular behavior related to environmental conditions. However, DL models are vulnerable to adversarial examples, which may lead to increased prediction error and wrong operational decisions. This work proposes a new scheme to detect adversarial examples and mitigate their impact on DL forecasting models. The approach is based on one-class classifiers and on features extracted from the data fed to the forecasting models. Tests were performed using data collected from a real-world PV power plant along with adversarial samples generated by the Fast Gradient Sign Method (FGSM) under multiple attack patterns and magnitudes. One-class Support Vector Machine (OCSVM) and Local Outlier Factor (LOF) were evaluated as detectors of attacks on Long Short-Term Memory (LSTM) and Temporal Convolutional Network (TCN) forecasting models. According to the results, the proposed scheme detected adversarial samples with high accuracy, achieving an average F1-score close to 90%. Moreover, the detection and mitigation approach strongly reduced the increase in prediction error caused by adversarial samples.
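For context, a forecasting model of the kind attacked and defended here could look like the following minimal PyTorch LSTM regressor, which maps a window of sensor readings to the next-step PV generation. The layer sizes, window length, and feature count are placeholder assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    """Illustrative LSTM forecaster: maps a window of sensor readings to the
    next-step PV generation. Layer sizes are placeholders, not the paper's values."""
    def __init__(self, n_features, hidden_size=64, horizon=1):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, horizon)

    def forward(self, x):             # x: (batch, window_len, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # forecast from the last hidden state

model = LSTMForecaster(n_features=5)
forecast = model(torch.randn(32, 24, 5))   # e.g. 24-step windows of 5 sensor channels
```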
In recent years, researchers have proposed a variety of deep learning models for wind power forecasting. These models predict the wind power generation of wind farms or entire regions more accurately than traditional machine learning algorithms or physical models. However, recent research has shown that deep learning models can often be manipulated by adversarial attacks. Since wind power forecasts are essential for the stability of modern power systems, it is important to protect them against this threat. In this work, we investigate the vulnerability of two different forecasting models to targeted, semi-targeted, and untargeted adversarial attacks. We consider a long short-term memory (LSTM) network for predicting the power generation of individual wind farms and a convolutional neural network (CNN) for forecasting the wind power generation throughout Germany. Moreover, we propose the Total Adversarial Robustness Score (TARS), an evaluation metric for quantifying the robustness of regression models to targeted and semi-targeted adversarial attacks. It assesses the impact of attacks on the model’s performance, as well as the extent to which the attacker’s goal was achieved, by assigning a score between 0 (very vulnerable) and 1 (very robust). In our experiments, the LSTM forecasting model was fairly robust and achieved a TARS value of over 0.78 for all adversarial attacks investigated. The CNN forecasting model achieved TARS values below 0.10 when trained conventionally, and was thus very vulnerable. However, its robustness could be significantly improved by adversarial training, which always resulted in a TARS value above 0.46.
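To make the attack categories concrete, the sketch below contrasts an untargeted FGSM step (pushing a regression model's predictions away from the ground truth) with a targeted one (pulling them toward an attacker-chosen output). This is a generic formulation under those assumptions; the paper's exact attack definitions and the TARS formula are not reproduced here.

```python
import torch

def fgsm_regression_attack(model, x, epsilon, loss_fn, y_true=None, y_target=None):
    """Illustrative FGSM step for a regression (forecasting) model.

    Untargeted: pass y_true and ascend the loss, pushing predictions away from the truth.
    Targeted:   pass y_target and descend the loss, pulling predictions toward the
                attacker's chosen output. Semi-targeted variants would constrain the
                target differently; the paper's exact formulations are not reproduced.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    if y_target is not None:                          # targeted attack
        loss, sign = loss_fn(model(x_adv), y_target), -1.0
    else:                                             # untargeted attack
        loss, sign = loss_fn(model(x_adv), y_true), 1.0
    loss.backward()
    return (x_adv + sign * epsilon * x_adv.grad.sign()).detach()

# Usage with a toy model (placeholders):
model = torch.nn.Linear(24, 1)
x, y = torch.randn(16, 24), torch.randn(16, 1)
x_untargeted = fgsm_regression_attack(model, x, 0.05, torch.nn.MSELoss(), y_true=y)
x_targeted = fgsm_regression_attack(model, x, 0.05, torch.nn.MSELoss(),
                                     y_target=torch.full((16, 1), 2.0))
```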