2021 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm)
DOI: 10.1109/smartgridcomm51999.2021.9631993
Neural network interpretability for forecasting of aggregated renewable generation

Cited by 10 publications (5 citation statements). References 11 publications.
“…We recommend reading [72][73][74] in order to explore methods of increasing the interpretability of a forecaster. Papers that emphasize the interpretability of solar forecasting models include [61,62,75]. Taking all of this into consideration, we recommend the following methodology when building a solar forecasting model: Start the process with fundamental baseline models and progressively advance from simpler models, such as linear regression (LR), to more intricate ones, including gradient boosting (GB) models, culminating with recurrent neural networks (RNNs), which use skip connections (ResNet).…”
Section: Neural Network (Deep Learning)
confidence: 99%
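The staged methodology quoted above (start from simple baselines, then move to more complex models) can be illustrated with a minimal numpy sketch. The synthetic "solar" series, the lag features, and the train/test split below are illustrative assumptions, not taken from the cited paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hourly "solar" series: a clipped daily sine shape plus noise.
t = np.arange(1000)
y = np.clip(np.sin(2 * np.pi * t / 24), 0, None) + 0.1 * rng.standard_normal(t.size)

# Simple lag features: previous hour and the value 24 hours earlier.
X = np.column_stack([y[23:-1], y[:-24]])
target = y[24:]
split = 800
X_tr, X_te = X[:split], X[split:]
y_tr, y_te = target[:split], target[split:]

# Baseline 1: persistence (predict the value observed 24 h earlier).
mae_persist = np.mean(np.abs(X_te[:, 1] - y_te))

# Baseline 2: ordinary least squares on the two lags (the "LR" stage).
A = np.column_stack([X_tr, np.ones(len(X_tr))])
coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
pred = np.column_stack([X_te, np.ones(len(X_te))]) @ coef
mae_lr = np.mean(np.abs(pred - y_te))

print(mae_persist, mae_lr)
```

Only once such baselines are recorded would one move on to gradient boosting and recurrent models, so that any added complexity can be justified against these error figures.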
“…Ref. [142] proposed a binary classification neural network and a regression neural network for solar power generation prediction. To achieve interpretability, they adopted three feature attribution methods, Integrated Gradients, Expected Gradients, and DeepLIFT, to evaluate the contribution of features.…”
Section: Energy Forecasting
confidence: 99%
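Of the three attribution methods named in this excerpt, Expected Gradients is the one that averages over baselines drawn from the data. A minimal Monte Carlo sketch for a toy differentiable model follows; the linear model, weights, and sampled baselines are illustrative assumptions, not from the cited works:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy differentiable "model": a fixed linear map f(x) = w @ x.
w = np.array([2.0, -1.0, 0.5])
f = lambda x: w @ x
grad_f = lambda x: w  # the gradient of a linear model is constant

def expected_gradients(x, baselines, n_samples=2000):
    """Monte Carlo Expected Gradients:
    E_{x'~D, a~U(0,1)}[(x - x') * grad f(x' + a * (x - x'))]."""
    attr = np.zeros_like(x)
    for _ in range(n_samples):
        xb = baselines[rng.integers(len(baselines))]  # baseline from the data
        a = rng.random()                              # interpolation coefficient
        attr += (x - xb) * grad_f(xb + a * (x - xb))
    return attr / n_samples

baselines = rng.standard_normal((500, 3))  # reference inputs standing in for "the data"
x = np.array([1.0, 2.0, 3.0])
attr = expected_gradients(x, baselines)

# For a linear model the exact value is w * (x - mean of the baselines).
exact = w * (x - baselines.mean(axis=0))
print(attr, exact)
```

For nonlinear models the gradient call would be replaced by automatic differentiation; the linear case is used here only because its exact attribution is known, which makes the Monte Carlo estimate easy to check.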
“…Using a hybrid deep learning model to excavate the spatial and temporal information from the input data is a promising way to improve the accuracy of HAB forecasting. However, it is hard to interpret the internal mechanism of the HABs from a data-driven predictive model. In recent years, strong emphasis has been placed on the interpretability, transparency, and reliability of deep learning models, leading to a new area of research referred to as explainable artificial intelligence (XAI). Numerous gradient-based, surrogate-model, and perturbation-based methods have been developed to measure the importance of input features to the model output. Integrated gradients (IG) is a recently developed gradient-based method, which quantifies the attribution of each input dimension by aggregating gradients along a linear path between the sample and the baseline. Compared to other XAI methods, IG simultaneously satisfies two axioms, sensitivity and implementation invariance, and shows better performance in interpreting deep learning models. Employing IG to interpret the predictive model is a promising way to quantitatively analyze the spatiotemporal drivers of the HABs process.…”
Section: Introduction
confidence: 99%
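The path integral described in this excerpt can be written out numerically in a few lines. The smooth "model" below is a made-up function with a hand-coded gradient, chosen only so the result can be checked against IG's completeness property (attributions sum to f(x) minus f at the baseline):

```python
import numpy as np

# Toy smooth "model" with a hand-coded gradient.
def f(x):
    return x[0] ** 2 + 3.0 * x[0] * x[1] + np.sin(x[1])

def grad_f(x):
    return np.array([2.0 * x[0] + 3.0 * x[1], 3.0 * x[0] + np.cos(x[1])])

def integrated_gradients(x, baseline, steps=1000):
    """Riemann-sum IG: (x - baseline) times the mean gradient of f
    along the straight-line path from the baseline to x."""
    alphas = (np.arange(steps) + 0.5) / steps            # midpoint rule
    path = baseline + alphas[:, None] * (x - baseline)   # points on the line
    grads = np.array([grad_f(p) for p in path])
    return (x - baseline) * grads.mean(axis=0)

x = np.array([1.0, 2.0])
x0 = np.zeros(2)
attr = integrated_gradients(x, x0)

# Completeness check: attributions should sum to f(x) - f(baseline).
print(attr.sum(), f(x) - f(x0))
```

In practice the hand-coded gradient would come from automatic differentiation of the trained network, and the quality of the Riemann approximation is controlled by the number of steps along the path.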