2021
DOI: 10.1016/j.apenergy.2021.117623
Day-ahead city natural gas load forecasting based on decomposition-fusion technique and diversified ensemble learning model

Cited by 36 publications (12 citation statements)
References 47 publications
“…At the same time, these sub-models may not be of the same type, and selecting an appropriate prediction model for each decomposed sub-series requires either subjective choice or automatic selection by an algorithm. These optimization processes all demand a large amount of computational time, and an improper decomposition can sometimes leave the final prediction results without significant improvement, or even degraded [11], making the model poorly cost-effective. The use of ensemble learning in combination models has, to some extent, enhanced their generalization performance, but the ability to capture nonlinear features still rests with the sub-models inside the ensemble [27]. However, an ensemble model combined with modal-decomposition preprocessing adds substantial time for training and for optimizing each sub-model's weight [28], so it is more computationally expensive and demands better hardware than modal-decomposition preprocessing alone. Multilayer composite models are better at capturing nonlinear features than basic single models.…”
Section: Introduction
confidence: 99%
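The decomposition-then-ensemble pipeline discussed in the statement above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the cited paper's method: a moving average stands in for the modal decomposition step (EMD/VMD in the literature), and a naive persistence model stands in for each trained sub-model; the per-component forecasts are fused by summation.

```python
# Hypothetical sketch: decompose a load series into components, forecast
# each component with its own (here: trivial) sub-model, and fuse by summing.

def decompose(series, window=3):
    """Split a series into a moving-average trend and a residual component."""
    trend = []
    for i in range(len(series)):
        lo = max(0, i - window + 1)
        trend.append(sum(series[lo:i + 1]) / (i + 1 - lo))
    residual = [x - t for x, t in zip(series, trend)]
    return trend, residual

def persistence_forecast(component):
    """Naive sub-model: predict the last observed value of the component."""
    return component[-1]

def ensemble_forecast(series):
    trend, residual = decompose(series)
    # One sub-model per decomposed component; forecasts fused by summation.
    return persistence_forecast(trend) + persistence_forecast(residual)

load = [100.0, 104.0, 98.0, 102.0, 106.0]  # hypothetical daily load values
print(ensemble_forecast(load))  # -> 106.0
```

In the cited work the sub-models would be trained learners and the fusion weights optimized, which is exactly the extra training and weight-optimization cost the quoted passage points out.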
“…Data preprocessing algorithms have focused on dimensionality reduction, improving the quality of the input data fed to the prediction module and thereby enhancing prediction accuracy [23]. However, dimensionality reduction has the intrinsic disadvantage of information loss, which leads to unavoidable error [24,25]. Thus, a data preprocessing algorithm is needed that can effectively detect and correct noisy data caused by various experimental errors while preserving the objective features of the original data as far as possible.…”
Section: Introduction
confidence: 99%
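The kind of preprocessing the passage calls for, detecting noisy points and correcting only those while leaving the rest of the series untouched, can be sketched as a local-median filter. The window size and threshold `k` below are illustrative assumptions, not any cited algorithm.

```python
# Hypothetical sketch: flag points far from a local median and replace only
# those, so unflagged points keep their original (objective) features.

def correct_noise(series, window=2, k=3.0):
    corrected = list(series)
    for i in range(len(series)):
        lo, hi = max(0, i - window), min(len(series), i + window + 1)
        neighbours = sorted(series[lo:i] + series[i + 1:hi])
        if not neighbours:
            continue
        median = neighbours[len(neighbours) // 2]
        # Spread of the neighbourhood around its median (guard against zero).
        spread = max(abs(x - median) for x in neighbours) or 1.0
        if abs(series[i] - median) > k * spread:
            corrected[i] = median  # correct only the flagged outlier
    return corrected

data = [10.0, 11.0, 10.5, 90.0, 10.8, 11.2]  # hypothetical noisy readings
print(correct_noise(data))  # -> [10.0, 11.0, 10.5, 11.0, 10.8, 11.2]
```

Only the spike at index 3 is replaced; every other point passes through unchanged, which is the "preserve the original features" property the statement asks of a preprocessing step.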
“…[31,32] The above factors significantly influence prediction accuracy. According to the forecasting results provided by Li et al. [23], prediction model performance in Beijing varied by season and was worst in winter, when the NGC was relatively high. In addition, Hribar et al. [30] observed that the mean error during public holidays in Ljubljana was higher than that for ordinary days.…”
Section: Introduction
confidence: 99%