Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining 2019
DOI: 10.1145/3292500.3330966

Adversarial Variational Embedding for Robust Semi-supervised Learning

Abstract: Semi-supervised learning is sought to leverage unlabelled data when labelled data is difficult or expensive to acquire. Deep generative models (e.g., the Variational Autoencoder (VAE)) and semi-supervised Generative Adversarial Networks (GANs) have recently shown promising performance in semi-supervised classification owing to their excellent discriminative representation ability. However, the latent code learned by a traditional VAE is not exclusive (repeatable) for a specific input sample, which prevents it from …
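To make the abstract's repeatability point concrete, here is a minimal sketch (PyTorch; the layer sizes are illustrative assumptions, not taken from the paper) showing that a standard VAE encoder returns a different latent code each time the same input is encoded, because the code is sampled from the approximate posterior rather than fixed:

```python
import torch
import torch.nn as nn

class VAEEncoder(nn.Module):
    def __init__(self, in_dim=784, latent_dim=16):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)       # posterior mean
        self.log_var = nn.Linear(128, latent_dim)  # posterior log-variance

    def forward(self, x):
        h = self.body(x)
        mu, log_var = self.mu(h), self.log_var(h)
        std = torch.exp(0.5 * log_var)
        return mu + std * torch.randn_like(std)    # reparameterized sample

encoder = VAEEncoder()
x = torch.randn(1, 784)
z1, z2 = encoder(x), encoder(x)
print(torch.allclose(z1, z2))  # False: the latent code for x is not repeatable
```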

Cited by 38 publications (19 citation statements)
References 28 publications (45 reference statements)
“…It is important to note that, since this work uses deep networks, it relies on labels and is therefore not unsupervised. Zhang et al. [101] proposed the use of a semi-supervised GAN for activity recognition. Unlike a standard GAN, the discriminator in the semi-supervised GAN performs a (K + 1)-class classification, covering the K activity classes plus the detection of fake samples.…”
Section: B. Labelling Scarcity (mentioning)
confidence: 99%
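As a rough illustration of the (K + 1)-class discriminator this statement describes, the following PyTorch sketch (with a hypothetical K and architecture, not taken from the cited paper) assigns real labelled samples to one of K activity classes and generated samples to the extra fake class:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 6  # hypothetical number of activity classes

discriminator = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, K + 1),          # K real classes + 1 "fake" class
)

real_x = torch.randn(32, 128)      # stand-in for real feature vectors
real_y = torch.randint(0, K, (32,))
fake_x = torch.randn(32, 128)      # stand-in for generator output

loss_real = F.cross_entropy(discriminator(real_x), real_y)
loss_fake = F.cross_entropy(
    discriminator(fake_x),
    torch.full((32,), K, dtype=torch.long),  # all fake samples -> class K
)
d_loss = loss_real + loss_fake
```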
“…As pointed out by Han et al. [103], it is often reasonable to assume that the input-output mapping is similar across different models, so better NN performance may be obtained by fitting all the parameters at the same time. Lastly, since poor generalization ability still limits the broader use of BCI, deep learning could be employed in the form of, e.g., autoencoders without manual feature selection [32,35]. When training neural networks, the inputs and outputs of the network must be encoded as vectors of numbers.…”
Section: Discussion (mentioning)
confidence: 99%
“…Such a reduction makes it possible to distinguish signals representing the different types of mental activity that the BCI system is to recognize [10,30]. In deep learning classification, however, explicit feature extraction is not always applied, as signal characteristics may be derived automatically by autoencoders [31,32]. Moreover, Wu et al. proposed an experimental scenario in which feature selection and classification were performed simultaneously [33].…”
Section: Introduction (mentioning)
confidence: 99%
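To illustrate the autoencoder-based feature extraction mentioned in this statement, a minimal PyTorch sketch follows (the window length and layer sizes are assumptions): after unsupervised reconstruction training, the encoder's bottleneck output serves as the feature vector, with no manual feature selection.

```python
import torch
import torch.nn as nn

class SignalAutoencoder(nn.Module):
    def __init__(self, window=256, feat_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(window, 128), nn.ReLU(), nn.Linear(128, feat_dim))
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, window))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = SignalAutoencoder()
x = torch.randn(8, 256)                      # a batch of signal windows
loss = nn.functional.mse_loss(model(x), x)   # unsupervised reconstruction objective
features = model.encoder(x)                  # learned features, no manual selection
```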
“…Interestingly, other applications of adversarial variational training frameworks have been reported. For example, Zhang et al. [24] proposed a semi-supervised Adversarial Variational Embedding that leverages both the power of the GAN as a high-quality generative model and the Variational AutoEncoder (VAE) as a posterior-distribution learner. They demonstrated that the combination of VAE and GAN yields significant improvements in semi-supervised classification.…”
Section: Related Work (mentioning)
confidence: 99%
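For intuition, the sketch below shows one common way to couple a VAE with a GAN discriminator (a generic VAE-GAN-style recipe in PyTorch; it is not the exact Adversarial Variational Embedding of Zhang et al. [24], and all sizes are illustrative). The VAE decoder doubles as the generator, and the discriminator pushes its outputs toward the data distribution:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, data_dim = 16, 784
enc = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 2 * latent_dim))
dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
disc = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

x = torch.randn(32, data_dim)                  # stand-in for real data
mu, log_var = enc(x).chunk(2, dim=1)           # approximate posterior q(z|x)
z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
x_rec = dec(z)

# VAE objective: reconstruction + KL(q(z|x) || N(0, I))
kl = -0.5 * torch.mean(1 + log_var - mu.pow(2) - log_var.exp())
vae_loss = F.mse_loss(x_rec, x) + kl

# GAN objective: discriminator scores real data high, reconstructions low
d_loss = (F.binary_cross_entropy_with_logits(disc(x), torch.ones(32, 1)) +
          F.binary_cross_entropy_with_logits(disc(x_rec.detach()), torch.zeros(32, 1)))
```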