2019
DOI: 10.1007/978-3-030-31760-7_3
Representation Learning in Power Time Series Forecasting

Cited by 9 publications (8 citation statements)
References 13 publications
“…Compared to research areas like computer vision, there is limited research on ITL for renewable power forecasts [6,7]. There has been some work on learning a transferable representation of the input using autoencoders [8,9,10,11]. While the principle of transferring an autoencoder to a target is combinable with our approach, we argue that considering the conditional distribution of the power forecast is more relevant for model selection and combination.…”
Section: Related Work
confidence: 96%
“…We ensure that the network is not learning an identity mapping during training by having a lower dimension in the bottleneck than in the input. Other alternatives to avoid this problem are, e.g., denoising autoencoders, where we induce random noise on the input features [1]. However, we excluded those variants as the current results suggest that they are not beneficial over vanilla autoencoders for day-ahead power forecasting [1].…”
Section: Inputs and Outputs
confidence: 97%
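The statement above can be made concrete with a small sketch. The following is a hypothetical illustration, not the cited paper's actual architecture: it assumes PyTorch, placeholder layer sizes, and a hypothetical `noise_std` parameter. The bottleneck is narrower than the input, so the network cannot learn a plain identity mapping, and setting `noise_std > 0` corrupts the input during training, which gives the denoising variant mentioned as an alternative.

```python
import torch
import torch.nn as nn

class BottleneckAutoencoder(nn.Module):
    """Undercomplete autoencoder: the bottleneck is smaller than the input,
    so the network cannot simply copy its input (identity mapping)."""

    def __init__(self, n_features: int = 64, n_bottleneck: int = 8):
        super().__init__()
        assert n_bottleneck < n_features, "bottleneck must be narrower than the input"
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, n_bottleneck),      # low-dimensional representation
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_bottleneck, 32), nn.ReLU(),
            nn.Linear(32, n_features),        # reconstruct the original features
        )

    def forward(self, x: torch.Tensor, noise_std: float = 0.0) -> torch.Tensor:
        # noise_std > 0 turns this into a denoising autoencoder:
        # the corrupted input must still reconstruct the clean target.
        if self.training and noise_std > 0.0:
            x = x + noise_std * torch.randn_like(x)
        return self.decoder(self.encoder(x))


# Reconstruction training loop; the clean input is always the target.
model = BottleneckAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(256, 64)                # placeholder batch of input features
model.train()
for _ in range(10):
    optimizer.zero_grad()
    x_hat = model(x, noise_std=0.0)     # set noise_std > 0 for the denoising variant
    loss = loss_fn(x_hat, x)
    loss.backward()
    optimizer.step()
```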
“…The article closest to ours is [1], which compared traditional feature extraction methods with feature extraction techniques from deep learning.…”
Section: Related Work
confidence: 99%