ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp39728.2021.9414410
Learning a Sparse Generative Non-Parametric Supervised Autoencoder

Abstract: This paper concerns a supervised generative non-parametric autoencoder. Classical methods are based on unsupervised variational autoencoders (VAEs). Variational autoencoders encourage the latent space to fit a prior distribution, such as a Gaussian. However, they tend to make stronger assumptions about the data, often leading to higher asymptotic bias when the model is wrong.

Cited by 9 publications (24 citation statements)
References 18 publications
“…The goal is to compute the weights W minimizing the total loss, which depends on both the classification loss and the reconstruction loss. Thus, we propose to minimize the following criterion to compute the weights W of the autoencoder (see [17] for details).…”
Section: A New Supervised Autoencoder (SAE) Framework (mentioning)
confidence: 99%
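A rough sketch of such a combined criterion is given below in PyTorch-style code: a cross-entropy classification loss on the latent code is added to a mean-squared reconstruction loss. The single-linear-layer encoder, decoder, and classifier head, and the weighting factor lambda_rec, are illustrative assumptions; the exact criterion and architecture are those described in [17].

import torch
import torch.nn as nn

class SupervisedAE(nn.Module):
    """Minimal supervised autoencoder: encoder, decoder, and a linear
    classifier head on the latent code (illustrative shapes only)."""
    def __init__(self, n_features, n_latent, n_classes):
        super().__init__()
        self.encoder = nn.Linear(n_features, n_latent)
        self.decoder = nn.Linear(n_latent, n_features)
        self.classifier = nn.Linear(n_latent, n_classes)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z), z

def total_loss(model, x, y, lambda_rec=1.0):
    """Classification loss plus weighted reconstruction loss."""
    x_hat, logits, _ = model(x)
    loss_cls = nn.functional.cross_entropy(logits, y)
    loss_rec = nn.functional.mse_loss(x_hat, x)
    return loss_cls + lambda_rec * loss_rec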
“…In this work, we relaxed the parametric distribution assumption on the latent space to learn a non-parametric data distribution of clusters [17]. Our network encourages the latent space to fit a distribution learned with the clustering labels rather than a parametric prior.…”
(mentioning)
confidence: 99%
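The excerpt does not spell out how the latent space is encouraged to fit the learned distribution. As a purely hypothetical illustration, the sketch below penalizes the distance between each latent code and the centroid of its cluster label, estimated from the current batch; the centroid target and the stop-gradient via detach are assumptions, not the published formulation in [17].

import torch

def cluster_latent_penalty(z, labels):
    """Pull each latent code toward the mean latent vector of its label
    group (a simple, non-parametric target estimated from the batch)."""
    penalty = z.new_zeros(())
    for c in labels.unique():
        mask = labels == c
        centroid = z[mask].mean(dim=0).detach()  # treat the centroid as a fixed target
        penalty = penalty + ((z[mask] - centroid) ** 2).sum()
    return penalty / z.shape[0]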
“…The goal is to compute the weights W minimizing the total loss, which depends on both the classification loss and the reconstruction loss. Thus, we propose to minimize the following criterion to compute the weights W of the autoencoder (see [25] for details).…”
Section: Criterion (mentioning)
confidence: 99%
“…Moreover the "group Lasso ℓ2,1 constraint" induces small sparsity [39] and the ℓ1 constraint induces unstructured sparsity [40,41]. Thus we used the ℓ1,1 constrained regularization penalty ‖W‖1,1 ≤ η for feature selection [25].…”
Section: Structured Constraints, Sparsity and Feature Selection (mentioning)
confidence: 99%
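Since the ℓ1,1 norm of a weight matrix is the sum of the absolute values of all its entries, the constraint ‖W‖1,1 ≤ η can be enforced by a Euclidean projection onto an ℓ1 ball after each gradient step. The sketch below uses the standard sorting-based projection; applying it to the whole flattened weight matrix after each optimizer step is an assumption about how the constraint is handled, not necessarily the exact procedure of [25].

import torch

def project_l1_ball(w, eta):
    """Euclidean projection of a weight tensor onto the l1 ball of
    radius eta, using the sorting-based algorithm."""
    v = w.flatten().abs()
    if v.sum() <= eta:
        return w.clone()
    u, _ = torch.sort(v, descending=True)
    css = torch.cumsum(u, dim=0)
    k = torch.arange(1, u.numel() + 1, device=w.device, dtype=w.dtype)
    # largest index where u_k - (css_k - eta) / k is still positive
    rho = torch.nonzero(u - (css - eta) / k > 0).max()
    theta = (css[rho] - eta) / (rho + 1)
    return torch.sign(w) * torch.clamp(w.abs() - theta, min=0.0)

# After each optimizer step, the constraint ||W||_{1,1} <= eta could be
# enforced in place, e.g.:
# with torch.no_grad():
#     layer.weight.copy_(project_l1_ball(layer.weight, eta))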