2018
DOI: 10.14311/nnw.2018.28.008

SPARSE REPRESENTATION LEARNING OF DATA BY AUTOENCODERS WITH L1/2 REGULARIZATION

Abstract: Autoencoder networks have been shown to be effective for unsupervised representation learning of images, documents, and time series. Sparse representation can improve the interpretability of the input data and the generalization of a model by eliminating redundant features and extracting the latent structure of the data. In this paper, we use the L1/2 regularization method to enforce sparsity on the hidden representation of an autoencoder, thereby achieving a sparse representation of the data. The performance of our approach…
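As a minimal sketch of the idea stated in the abstract (not the paper's exact architecture or hyperparameters), the snippet below adds an L1/2 penalty on the hidden code to a standard reconstruction loss; the layer sizes, the weight lam, and the eps smoothing term are illustrative assumptions:

```python
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    """Single-hidden-layer autoencoder; sparsity is enforced on the
    hidden code h by an L1/2 penalty added to the reconstruction loss."""
    def __init__(self, n_in=784, n_hidden=196):  # illustrative sizes
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

    def forward(self, x):
        h = self.encoder(x)
        return self.decoder(h), h

def l_half_penalty(h, eps=1e-8):
    # L1/2 regularizer: sum_i |h_i|^(1/2); eps keeps the gradient finite at 0.
    return torch.sqrt(h.abs() + eps).sum()

model = SparseAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 1e-4  # illustrative regularization weight

def train_step(x):
    opt.zero_grad()
    x_hat, h = model(x)
    loss = nn.functional.mse_loss(x_hat, x) + lam * l_half_penalty(h)
    loss.backward()
    opt.step()
    return loss.item()

# Example: one step on a random batch of flattened 28x28 inputs.
print(train_step(torch.rand(64, 784)))
```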

Cited by 9 publications (3 citation statements). References 21 publications.
“…based on the inverse of the expected Q and E[P] using the expected P. These expectations are calculated using specific formulas involving the covariance matrix of the uncorrupted data X. The method of (Li et al. 2018) is a modified version of the Robust Autoencoder (RAE) designed to enhance the autoencoder's resilience when dealing with noisy or corrupted input data. This enhancement is achieved through a specific type of regularization known as L2,1 regularization.…”
Section: Marginalized Denoising Autoencoder
confidence: 99%
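For reference, a minimal sketch of the L2,1 penalty named in this statement; applying it to the reconstruction residual is one common way to gain robustness to corrupted samples, though this is not necessarily the exact formulation of Li et al. 2018, and lam is an illustrative weight:

```python
import torch

def l21_norm(M):
    # L2,1 norm: the sum of the Euclidean norms of the rows of M.
    # On the residual X - X_hat it is L2 within each sample but L1
    # across samples, so a few outlier rows cannot dominate the loss.
    return torch.linalg.norm(M, dim=1).sum()

def robust_loss(x, x_hat, W=None, lam=1e-3):
    # Hypothetical robust objective: L2,1 reconstruction term, plus an
    # optional L2,1 penalty on a weight matrix W for row-sparse features.
    loss = l21_norm(x - x_hat)
    if W is not None:
        loss = loss + lam * l21_norm(W)
    return loss
```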
“…From this idea, a fault detection model can be created with the following steps: by following these steps and using the two most popular techniques for optimizing autoencoders (the denoising Auto-Encoder [54], dAE, and the sparse Auto-Encoder [55], sAE), four encoder models were created: dAE + OCSVM (dAESVM), sAE + OCSVM (sAESVM), dAE + iForest (dAEForest) and sAE + iForest (sAEForest). In addition to the detection function hyperparameters, CVRS also tunes the hyperparameters of these autoencoder-based models, listed in Table 3 with their probability distributions.…”
Section: Deep Learning
confidence: 99%
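A rough sketch of one such autoencoder + one-class detector pipeline, using scikit-learn and reconstruction error as the anomaly score; the stand-in MLPRegressor autoencoder, the placeholder data, and all hyperparameters are illustrative assumptions, not the cited dAE/sAE models or their CVRS-tuned settings:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor  # stand-in autoencoder
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

# Train a simple autoencoder on (placeholder) normal data by
# regressing the input onto itself.
X_train = np.random.rand(500, 20)
ae = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
ae.fit(X_train, X_train)

def recon_error(X):
    # Per-sample reconstruction error, used as the detector's feature.
    return np.linalg.norm(X - ae.predict(X), axis=1, keepdims=True)

# iForest variant (cf. dAEForest/sAEForest in the quoted statement).
det = IsolationForest(random_state=0).fit(recon_error(X_train))
# OCSVM variant (cf. dAESVM/sAESVM):
# det = OneClassSVM(nu=0.05).fit(recon_error(X_train))

X_new = np.random.rand(10, 20)
labels = det.predict(recon_error(X_new))  # +1 = normal, -1 = fault
```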
“…Lastly, in generative models, our approach is relevant for reconstructing images from their parts, similar to [9]. Since we are trying to create a model where missing features define a class, we could also indicate which features are missing and where.…”
Section: Future Work
confidence: 99%