2021
DOI: 10.1021/acs.iecr.1c00141
Data Enhancement for Data-Driven Modeling in Power Plants Based on a Conditional Variational-Adversarial Generative Network

Abstract: The quality of data directly affects the performance of data-driven models that are developed to monitor the operation of power plants. Constrained by external instructions and other factors, the operational data of a power plant generally present an irregular distribution, which leads to data imbalance. In this paper, a novel data generation method called a conditional variational autoencoder and generative adversarial networks (CVAE-GAN) is proposed. The original samples are first mapped to the latent variab…

Cited by 4 publications (4 citation statements). References 50 publications.
“…Furthermore, the effectiveness of the generative network’s training progress is evaluated through its loss function, with a decrease in loss corresponding to an increase in generative capabilities. Meanwhile, instead of weight clipping, which can limit performance and reduce stability, WGAN introduces a gradient penalty to enforce Lipschitz constraints. With the inclusion of the gradient penalty, the loss function of WGAN is updated as follows:

$$L = \mathbb{E}_{x \sim P_g}\big[F(x)\big] - \mathbb{E}_{x \sim P_r}\big[F(x)\big] + \lambda\,\mathbb{E}_{x \sim P}\Big[\big(\lVert \nabla_x F(x) \rVert_2 - 1\big)^2\Big]$$

where λ serves as the weighting factor for the gradient penalty term, ∇<sub>x</sub>F(x) is the gradient of the discriminative network, and P signifies the probability distribution between the virtual and original data.…”
Section: Methods
confidence: 99%
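The loss above can be made concrete with a minimal NumPy sketch. This is not the paper's network: it assumes a hypothetical linear critic F(x) = w·x (so its gradient is simply w, with no need for autograd), plus toy stand-ins for the real and generated batches, to show how the three expectation terms combine:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear critic F(x) = w . x, so grad_x F(x) = w for every x.
# ||w||_2 = 1, so this critic already satisfies the 1-Lipschitz target
# and the penalty term should vanish.
w = np.array([0.6, 0.8])

def critic(x):
    return x @ w

# Toy "real" and "generated" batches (stand-ins for plant data and
# CVAE-GAN samples; shapes and distributions are illustrative only).
x_real = rng.normal(size=(64, 2))
x_gen = rng.normal(loc=2.0, size=(64, 2))

# Sample x_hat from P by interpolating between real and generated points,
# as in the WGAN-GP construction.
eps = rng.uniform(size=(64, 1))
x_hat = eps * x_real + (1 - eps) * x_gen

# Gradient of the linear critic at every x_hat is just w (analytic).
grad_norms = np.full(len(x_hat), np.linalg.norm(w))

lam = 10.0  # weighting factor lambda for the penalty term
penalty = lam * ((grad_norms - 1.0) ** 2).mean()
loss = critic(x_gen).mean() - critic(x_real).mean() + penalty
```

Because the toy critic's gradient norm is exactly 1, `penalty` is zero and `loss` reduces to the Wasserstein estimate E[F(x_gen)] − E[F(x_real)]; in a real model the gradient would come from backpropagation through the critic.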
“…Meanwhile, instead of weight clipping, which can limit performance and reduce stability, WGAN introduces a gradient penalty to enforce Lipschitz constraints [40]. With the inclusion of the gradient penalty, the loss function of WGAN is updated as follows:

$$L = \mathbb{E}_{x \sim P_g}\big[F(x)\big] - \mathbb{E}_{x \sim P_r}\big[F(x)\big] + \lambda\,\mathbb{E}_{x \sim P}\Big[\big(\lVert \nabla_x F(x) \rVert_2 - 1\big)^2\Big]$$

where λ serves as the weighting factor for the gradient penalty term, ∇<sub>x</sub>F(x) is the gradient of the discriminative network, and P signifies the probability distribution between the virtual and original data. The inclusion of the gradient penalty technique effectively controls the gradient, thereby mitigating problems associated with gradient vanishing and exploding.…”
Section: Methods
confidence: 99%
“…It was used to solve the data imbalance caused by the irregular distribution of power plant operation data. The research results indicated that using the dataset generated by the model for training could improve the accuracy of NOx emission prediction [15].…”
Section: Introduction
confidence: 98%