2020
DOI: 10.48550/arxiv.2006.05534
Preprint

Novelty Detection via Robust Variational Autoencoding

Abstract: We propose a new method for novelty detection that can tolerate nontrivial corruption of the training points. Previous works assumed either no or very low corruption. Our method trains a robust variational autoencoder (VAE), which aims to generate a model for the uncorrupted training points. To gain robustness to corruption, we incorporate three changes to the common VAE: 1. Modeling the latent distribution as a mixture of Gaussian inliers and outliers, while using only the inlier component when testing; 2. Ap…
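
The mixture latent prior is the abstract's core robustness device: inlier points are modeled by one Gaussian component, corrupted points by another, and only the inlier component is used at test time. Below is a minimal PyTorch-style sketch of that idea. The class name RobustVAE, the soft inlier weight, and the choice of a KL regularizer with an L1 reconstruction error are illustrative assumptions, not the authors' released code (the abstract's remaining changes are truncated above).

```python
# Illustrative sketch only (assumed names and loss terms), not the
# authors' implementation: a VAE whose latent prior mixes an inlier
# and an outlier component, with testing done via the inlier side.
import torch
import torch.nn as nn

class RobustVAE(nn.Module):
    def __init__(self, in_dim=784, latent_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        # soft responsibility: estimated probability the point is an inlier
        self.inlier_logit = nn.Linear(256, 1)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar, torch.sigmoid(self.inlier_logit(h))

def training_loss(model, x, beta=1.0, alpha=1.0):
    recon, mu, logvar, w = model(x)          # w ~ P(inlier | x)
    rec = (recon - x).abs().mean(dim=1)      # robust L1 reconstruction error
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)
    # Downweight suspected outliers so they cannot dominate the fit;
    # the -log(w) term keeps w from collapsing to zero. KL is a
    # stand-in here for whatever regularizer the paper actually uses.
    return (w.squeeze(1) * (rec + beta * kl)).mean() \
        - alpha * torch.log(w + 1e-6).mean()

def novelty_score(model, x):
    # Test time: score against the inlier model via reconstruction error.
    recon, _, _, _ = model(x)
    return (recon - x).abs().mean(dim=1)     # higher = more novel
```

The point mirrored from the abstract: the outlier component absorbs corrupted training points during fitting, then is simply dropped at test time.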

Cited by 1 publication (1 citation statement)
References: 53 publications
“…Thanks to the marvelous representation ability of DNNs, many reconstruction-based methods using DNNs have been developed for UAD in recent years. They mainly employ deep generative models (Zenati et al, 2018; Perera et al, 2019; Lai et al, 2020b) or autoencoders (AE) (Chen et al, 2017; Pidhorskyi et al, 2018; Abati et al, 2019) to reconstruct data and determine the abnormality of data via its reconstruction error. For instance, DAGMM (Zong et al, 2018) feeds the latent representations of the AE into a Gaussian mixture model and jointly optimizes them.…”
Section: Related Work
Confidence: 99%
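
For context on the reconstruction-error family the citing paper describes, here is a hedged two-stage sketch in the spirit of DAGMM: train an autoencoder, fit a Gaussian mixture on its latent codes, and score a point by reconstruction error plus negative mixture log-likelihood. DAGMM proper optimizes the AE and the mixture jointly; the two-stage split and names like fit_and_score are simplifications for illustration.

```python
# Simplified, two-stage stand-in for DAGMM-style scoring (the real
# method trains autoencoder and mixture jointly): anomaly score =
# reconstruction error minus GMM log-likelihood of the latent code.
import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture

class AE(nn.Module):
    def __init__(self, in_dim=784, latent_dim=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, in_dim))

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

def fit_and_score(model, train_x, test_x, n_components=4, steps=100):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):                   # toy full-batch training loop
        recon, _ = model(train_x)
        loss = ((recon - train_x) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        _, z_train = model(train_x)
        recon, z_test = model(test_x)
    gmm = GaussianMixture(n_components).fit(z_train.numpy())
    rec_err = ((recon - test_x) ** 2).mean(dim=1).numpy()
    # Higher score = more anomalous: poorly reconstructed points whose
    # latent codes are unlikely under the learned mixture stand out.
    return rec_err - gmm.score_samples(z_test.numpy())
```

Usage reduces to scores = fit_and_score(AE(), train_x, test_x) followed by thresholding; this is exactly the reconstruct-then-score pattern the quoted passage attributes to the UAD literature.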