2022
DOI: 10.1016/j.egyr.2022.08.180
A Multi-step ahead photovoltaic power forecasting model based on TimeGAN, Soft DTW-based K-medoids clustering, and a CNN-GRU hybrid neural network

Cited by 47 publications (10 citation statements)
References 43 publications
“…It introduces the concept of a supervised loss and an embedding network to reduce the dimensionality of the adversarial learning space. It can handle a mixed-data setting, in which both static data (attributes) and sequential data (features) are generated at the same time [16]. TimeGAN is a framework for synthesizing sequential data composed of four networks: generator, discriminator, recovery, and embedder.…”
Section: TimeGAN
confidence: 99%
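The four-network composition described in the excerpt above can be sketched in a deliberately simplified form, with untrained linear maps standing in for each recurrent network. All dimensions and names here are hypothetical, chosen only to make the data flow between the networks visible:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only.
SEQ_LEN, FEAT_DIM, HIDDEN_DIM, NOISE_DIM = 24, 5, 8, 4

def linear(in_dim, out_dim):
    """A single random linear map standing in for a trained recurrent network."""
    W = rng.normal(scale=0.1, size=(in_dim, out_dim))
    return lambda x: np.tanh(x @ W)

# TimeGAN's four networks, reduced to shape-preserving stubs:
embedder = linear(FEAT_DIM, HIDDEN_DIM)        # real features -> latent codes
recovery = linear(HIDDEN_DIM, FEAT_DIM)        # latent codes  -> reconstructed features
generator = linear(NOISE_DIM, HIDDEN_DIM)      # noise         -> synthetic latent codes
discriminator = linear(HIDDEN_DIM, 1)          # latent codes  -> real/fake score

x = rng.normal(size=(SEQ_LEN, FEAT_DIM))       # one real multivariate sequence
h = embedder(x)                                # embed into the lower-dim latent space
x_tilde = recovery(h)                          # autoencoding path (reconstruction loss)

z = rng.normal(size=(SEQ_LEN, NOISE_DIM))
h_hat = generator(z)                           # adversarial path runs in latent space
x_hat = recovery(h_hat)                        # decode synthetic latents to data space

score_real = discriminator(h)                  # discriminator judges latent sequences
score_fake = discriminator(h_hat)
```

The point of the sketch is the wiring, not the learning: the generator and discriminator operate in the embedder's lower-dimensional latent space, which is what the excerpt means by reducing the adversarial learning space's dimensionality.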
“…DTW [23] distance is a typical technique for addressing time-series-related problems [24], and is also practical and well suited to the building energy field [25]. In this study, it is used to measure the distance between multi-dimensional data for k-medoids clustering, and it is calculated following Equations (5)-(7) [26].…”
Section: Distance Measure
confidence: 99%
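As a concrete reference point, the classical (non-smoothed) DTW distance underlying the Soft DTW measure can be computed with the standard dynamic program. The helper below is an illustrative sketch of that dynamic program, not the paper's implementation, and uses Euclidean point-wise cost:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two sequences of points.

    Standard O(len(a) * len(b)) dynamic program with Euclidean point-wise
    cost. Each point may itself be multi-dimensional, as in the
    multi-dimension distances used for k-medoids clustering.
    """
    a = np.asarray(a, dtype=float).reshape(len(a), -1)
    b = np.asarray(b, dtype=float).reshape(len(b), -1)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)   # accumulated-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because the warping path may stretch either sequence, `dtw_distance([1, 2, 3], [1, 1, 2, 2, 3, 3])` is 0: the shorter sequence aligns perfectly with the time-dilated one, which is exactly why DTW suits time series whose shapes match but whose timing drifts.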
“…Such performance is not attainable with shallow learning models. Deep learning models, including the stacked autoencoder (SAE) [27,28], deep belief network (DBN) [29,30], recurrent neural network (RNN) [31], and enhanced variants of the long short-term memory (LSTM) [32,33] and gated recurrent unit (GRU) [34], offer improved accuracy and stability in prediction. In References [1,35], a CNN was employed for data filtering and denoising, while the pooling layer reduced data dimensionality and memory requirements and improved processing speed.…”
Section: Introduction
confidence: 99%
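The dimensionality-reducing role that the excerpt above attributes to the pooling layer can be illustrated with a minimal non-overlapping 1-D max-pooling sketch (the function name and window size are illustrative, not taken from the cited papers):

```python
import numpy as np

def max_pool_1d(x, pool_size=2):
    """Non-overlapping 1-D max pooling.

    Keeps the strongest activation per window, shrinking the sequence
    length by a factor of pool_size and thereby reducing the memory
    needed by the layers that follow.
    """
    n = (len(x) // pool_size) * pool_size              # drop a ragged tail, if any
    return np.asarray(x[:n], dtype=float).reshape(-1, pool_size).max(axis=1)

signal = np.array([0.1, 0.9, 0.3, 0.4, 0.8, 0.2])
pooled = max_pool_1d(signal, pool_size=2)              # -> [0.9, 0.4, 0.8]
```

Halving the sequence length at each pooling stage is what lets a CNN front-end act as the cheap filtering/denoising step before a recurrent model such as a GRU consumes the compressed sequence.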