2020
DOI: 10.1109/tii.2019.2951622

Supervised Variational Autoencoders for Soft Sensor Modeling With Missing Data


Cited by 95 publications (36 citation statements); references 30 publications.

Citation statements:
“…, where the dimension of the sample is s, then the VAE [19][20][21][22] learns a hidden-layer representation that follows a Gaussian distribution in that space. Assuming the training samples conform to this Gaussian in the hidden-layer representation space, i.e., that all training samples gather around a single cluster center, samples far from the cluster center are abnormal.…”
Section: Anomaly Detection Model Based on VAE Given N Normal Training Samples (mentioning)
Confidence: 99%
“…Unlike the methods above, in this paper we propose an end-to-end deep learning framework for abnormal event detection. The proposed method is based on the variational autoencoder (VAE) [19][20][21][22], which maps high-dimensional raw input data to low-dimensional hidden-layer representations through deep learning, and it constrains the low-dimensional hidden-layer representation to conform to a Gaussian distribution.…”
Section: Introduction (mentioning)
Confidence: 99%
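For background, the Gaussian constraint mentioned in this excerpt comes from the KL term of the standard VAE objective (general VAE theory, not a formula specific to the cited paper):

```latex
% Standard VAE evidence lower bound; the KL term constrains the approximate
% posterior q_phi(z|x) toward the Gaussian prior p(z) = N(0, I).
\mathcal{L}(\theta, \phi; x)
  = \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
  - D_{\mathrm{KL}}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right),
\qquad p(z) = \mathcal{N}(0, I)
```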
“…Discriminative methods include Multiple Imputation by Chained Equations (MICE) [13], Random-Forest-based imputation (MissForest) [14], and matrix completion [15]. Generative methods consist mostly of techniques based on Deep Learning (DL), e.g., Variational Autoencoders (VAE) [16], [17], Neural Networks with Random Weights (NNRW) [18], Denoising Autoencoders (DAE) [19], and Generative Adversarial Networks (GAN) [20], [21].…”
Section: Introduction (mentioning)
Confidence: 99%
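As a concrete reference for the discriminative methods named in this excerpt, MICE-style imputation is available off the shelf in scikit-learn; the short sketch below (an illustration with synthetic data, not any cited paper's pipeline) imputes randomly masked entries with IterativeImputer:

```python
# MICE-style iterative imputation with scikit-learn (illustrative data).
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables the estimator)
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X[rng.random(X.shape) < 0.2] = np.nan  # knock out ~20% of entries

# Each feature is regressed on the others in round-robin fashion (chained equations).
imputer = IterativeImputer(max_iter=10, random_state=0)
X_filled = imputer.fit_transform(X)
print(np.isnan(X_filled).any())  # False: all missing entries imputed
```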
“…Discriminative methods include Multiple Imputation by Chained Equations (MICE) [13], Random-Forest-based imputation (MissForest) [14], and matrix completion [15]. Generative methods consist mostly of techniques based on Deep Learning (DL), e.g., Denoising Autoencoders (DAE) [2], [16] and Generative Adversarial Networks (GAN) [17], [18]. A GAN learns the latent distribution of a dataset and can generate realistic samples from random noise.…”
Section: Introduction (mentioning)
Confidence: 99%
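On the generative side, a common VAE-based imputation recipe (a generic sketch under stated assumptions, not the cited paper's supervised algorithm) is to initialize the missing entries, then repeatedly encode/decode and write the reconstruction back into the missing positions only. This sketch reuses the VAE class from the anomaly detection example above; the iteration count and masking rate are illustrative.

```python
import torch

def vae_impute(model, x_obs, mask, n_iters=10):
    """Iteratively refine missing entries with a trained VAE.

    x_obs: data tensor with missing entries set to 0.
    mask:  1.0 where observed, 0.0 where missing.
    """
    x_filled = x_obs.clone()
    for _ in range(n_iters):
        with torch.no_grad():
            x_hat, _, _ = model(x_filled)
        # Keep observed values; overwrite only the missing positions.
        x_filled = mask * x_obs + (1.0 - mask) * x_hat
    return x_filled

# Toy usage: mask ~30% of entries and impute them.
model = VAE()  # the VAE class from the earlier sketch; train it on complete data first
x = torch.randn(8, 20)
mask = (torch.rand_like(x) > 0.3).float()
imputed = vae_impute(model, x * mask, mask)
```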