2019
DOI: 10.1007/978-3-030-30241-2_41
Hyper-parameter Optimization of Multi-attention Recurrent Neural Network for Battery State-of-Charge Forecasting

Abstract: In recent years, a rapid deployment of battery energy storage systems for diverse smart grid services has been seen in electric power systems. However, cost-effective, multi-objective provision of these services necessitates forecasting methods for developing efficient capacity allocation and risk management strategies under the uncertainty of battery state-of-charge. The aim of this paper is to assess the tuning efficiency of a multi-attention recurrent neural network for multi-st…

Cited by 6 publications (6 citation statements)
References 10 publications (11 reference statements)
“…MOTPE allows multiple performance metrics to be optimized at once by analyzing the results of the current models against the currently observed frontier of solutions, using a function that calculates the expected hypervolume improvement at each step of its optimization. To the best of our knowledge, OPTUNA has not been used extensively for software engineering problems (but it has been used in domains such as disease diagnosis [77], equipment performance forecasting [39], and others). We assert that it is important to baseline our new methods against OPTUNA (and MOTPE) since this is a clear extension and improvement of HYPEROPT.…”
Section: From Effort To Health Estimation (mentioning, confidence: 99%)
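MOTPE's expected-hypervolume-improvement criterion rests on the dominated hypervolume of the current Pareto front. As an illustrative, pure-Python sketch (not Optuna's actual implementation), the following computes the 2D hypervolume of a set of minimized objectives relative to a reference point:

```python
def pareto_front(points):
    """Keep only non-dominated points (minimization in both objectives)."""
    front = []
    for p in points:
        if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points):
            front.append(p)
    return sorted(front)

def hypervolume_2d(points, ref):
    """Area dominated by the Pareto front, bounded above by `ref`."""
    front = pareto_front(points)  # sorted by x ascending, so y descends
    hv, prev_y = 0.0, ref[1]
    for x, y in front:
        hv += (ref[0] - x) * (prev_y - y)  # add the new horizontal strip
        prev_y = y
    return hv

# Four candidate models with two objectives each; (3, 3) is dominated.
points = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
print(hypervolume_2d(points, ref=(5.0, 5.0)))  # → 11.0
```

An optimizer such as MOTPE would favor new trials whose expected objective values increase this dominated area the most.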
“…The learning rate is adjusted for the Adam optimization algorithm in the model. The optimization process ran for 100 trials on each of the datasets, following the sequential automatic hyper-parameter optimization presented in [54]. The inputs to the variational MARNN model were the hyper-parameters selected by the TPE at each trial and the training and validation data of the datasets.…”
Section: Hyper-parameter Optimization (mentioning, confidence: 99%)
“…Each of the trials was restricted to 20 epochs with an early stopping patience of 5. More details about the TPE optimizer can be found in [53], and its application to the MARNN model is described in [54].…”
Section: Hyper-parameter Optimization (mentioning, confidence: 99%)
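The early stopping rule above (at most 20 epochs, patience of 5) can be sketched in pure Python; the validation-loss curve here is purely illustrative:

```python
def train_with_early_stopping(val_losses, max_epochs=20, patience=5):
    """Stop once validation loss has not improved for `patience` epochs."""
    best, best_epoch = float("inf"), -1
    for epoch, loss in enumerate(val_losses[:max_epochs]):
        if loss < best:
            best, best_epoch = loss, epoch  # new best: reset the counter
        elif epoch - best_epoch >= patience:
            break  # early stop: no improvement for `patience` epochs
    return best, best_epoch

# Toy validation curve: improves until epoch 3, then plateaus.
losses = [1.0, 0.8, 0.6, 0.5, 0.55, 0.56, 0.57, 0.58, 0.59, 0.60, 0.61]
print(train_with_early_stopping(losses))  # → (0.5, 3)
```

On this curve, training halts at epoch 8, five epochs after the best loss at epoch 3, well before the 20-epoch cap.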
“…The attention mechanism introduced in [14] is a recent advancement in recurrent deep learning that further improves on the vanilla RNN in memorizing long source sequences. The main difference from the RNN is that attention builds an aggregate context vector that is filtered specifically for each output time step and memorized in the decoder layer [15]. In this study, we utilize the MARNN model described in [16], which deploys lagged sample values from previous input sequences at decoding time.…”
Section: E Multi-attention Recurrent Neural Network (mentioning, confidence: 99%)
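The aggregate context vector described above can be illustrated with a minimal pure-Python sketch: softmax-normalized alignment weights produce a weighted sum of encoder hidden states, recomputed per output step. The scores and states here are made-up values; a real model derives the scores from learned parameters:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of alignment scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def context_vector(scores, encoder_states):
    """Aggregate encoder states into one context vector for an output step."""
    weights = softmax(scores)
    dim = len(encoder_states[0])
    return [sum(w * h[d] for w, h in zip(weights, encoder_states))
            for d in range(dim)]

# Three encoder hidden states (dimension 2) and their alignment scores for
# one decoder step; a higher score gives that state a larger share.
states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
ctx = context_vector([2.0, 1.0, 0.5], states)
print(ctx)
```

Because the scores (and hence the weights) change at each output time step, the decoder sees a different filtered summary of the input sequence for every prediction, which is what lets attention cope with long source sequences.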