2016
DOI: 10.48550/arxiv.1602.07109
Preprint
Variational Inference for On-line Anomaly Detection in High-Dimensional Time Series

Cited by 14 publications (19 citation statements)
References 0 publications
“…Alternatively, researchers seek to use a deterministic Recurrent Neural Network (RNN) [29] to produce hidden states; the latent variable is then conditioned on the RNN hidden states and the previous latent variables, so that the temporal structure is captured. The resulting models, including VRNN [16] and STORN [30], can be viewed as sequential versions of the VAE. However, these models largely do not account for the presence of anomalies in their objective functions.…”
Section: Related Work
confidence: 99%
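The VRNN/STORN construction described in this citation statement can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the cited papers' actual implementations): all weights are random stand-ins for learned parameters, the dimensions are made up, and only one step of the sequential ELBO's KL term is shown. The key point from the quote — the latent z_t is conditioned on the deterministic RNN state, which itself summarizes previous observations and latents — is what the prior/posterior split illustrates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (hypothetical, not from the cited papers).
x_dim, h_dim, z_dim = 5, 8, 3

# Randomly initialised weights standing in for learned parameters.
W_prior = rng.normal(size=(h_dim, 2 * z_dim)) * 0.1
W_post = rng.normal(size=(x_dim + h_dim, 2 * z_dim)) * 0.1
W_rnn = rng.normal(size=(x_dim + z_dim + h_dim, h_dim)) * 0.1

def vrnn_step(x_t, h_prev):
    """One VRNN-style step: the latent z_t is conditioned on the
    deterministic RNN state h_prev, which in turn encodes all
    previous observations and latent variables."""
    # Prior over z_t depends only on the recurrent state.
    mu_p, logvar_p = np.split(h_prev @ W_prior, 2)
    # Approximate posterior additionally sees the current observation.
    mu_q, logvar_q = np.split(np.concatenate([x_t, h_prev]) @ W_post, 2)
    # Reparameterisation trick: z_t = mu + sigma * eps.
    z_t = mu_q + np.exp(0.5 * logvar_q) * rng.normal(size=z_dim)
    # Gaussian KL(q || p), the regulariser in the sequential ELBO.
    kl = 0.5 * np.sum(
        logvar_p - logvar_q
        + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
        - 1.0
    )
    # Deterministic recurrence carries x_t and z_t forward in time.
    h_t = np.tanh(np.concatenate([x_t, z_t, h_prev]) @ W_rnn)
    return z_t, h_t, kl

h = np.zeros(h_dim)
total_kl = 0.0
for x_t in rng.normal(size=(4, x_dim)):  # a toy length-4 sequence
    z, h, kl = vrnn_step(x_t, h)
    total_kl += kl
```

Note that nothing in this objective treats anomalous observations specially — which is exactly the limitation the citing paper points out.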
“…• STORN [30]: a sequential version of the VAE, built on a stochastic recurrent neural network.…”
Section: A. Protocols and Settings
confidence: 99%
“…To detect outliers w.r.t. the target task, such approaches typically rely on some sort of confidence score to decide on the reliability of the prediction, which is either produced by modifying the model and/or training procedure, or computed/extracted post-hoc from the model and/or predictions (An & Cho, 2015;Sölch et al, 2016;Hendrycks & Gimpel, 2016;Liang et al, 2017;Hendrycks et al, 2018;Shafaei et al, 2018;DeVries & Taylor, 2018;Sricharan & Srivastava, 2018;Ahmed & Courville, 2019). Alternatively, some methods use predictive uncertainty estimates for OoD detection (Gal & Ghahramani, 2016;Lakshminarayanan et al, 2017;Malinin & Gales, 2018;Osawa et al, 2019;Ovadia et al, 2019) (cf.…”
Section: Related Work
confidence: 99%
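The post-hoc confidence scores mentioned in this statement can be illustrated with the simplest baseline among the works cited, the maximum softmax probability of Hendrycks & Gimpel (2016). The sketch below uses toy logits and a hypothetical threshold; a real pipeline would choose the threshold on validation data.

```python
import numpy as np

def max_softmax_confidence(logits):
    """Post-hoc confidence score extracted from a classifier's
    predictions: the maximum softmax probability
    (the Hendrycks & Gimpel, 2016 baseline)."""
    logits = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
    return probs.max(axis=-1)

# Toy logits: the first row is a confident prediction, the second is not.
logits = np.array([[8.0, 0.5, 0.2],
                   [1.1, 1.0, 0.9]])
conf = max_softmax_confidence(logits)
# A sample is flagged as out-of-distribution when its confidence
# falls below a threshold (0.9 here is a hypothetical choice).
is_ood = conf < 0.9
```

The alternative family mentioned in the quote (Gal & Ghahramani; Lakshminarayanan et al.) would instead aggregate predictions over stochastic forward passes or an ensemble to obtain a predictive-uncertainty estimate.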
“…There are numerous papers describing the use of the VAE for anomaly detection - [19], [22], [6] - but none makes a fuller comparison with other generative and classical methods, and only some use non-image datasets. In [1], the authors describe the advantage of the VAE over the AE: it generalizes more easily, since it works on probabilities.…”
Section: Variational Autoencoder
confidence: 99%
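The "working on probabilities" advantage this statement attributes to the VAE is usually exploited through a reconstruction-probability anomaly score in the spirit of An & Cho (2015) [1]. The sketch below is a minimal illustration, not any cited paper's implementation: the encoder and decoder are hypothetical stubs standing in for a trained VAE that models data near the origin, so a distant point receives a lower score.

```python
import numpy as np

rng = np.random.default_rng(1)

def reconstruction_probability(x, encode, decode, n_samples=16):
    """Anomaly score in the spirit of An & Cho (2015): average the
    log-likelihood of x under the decoder over latent samples drawn
    from the approximate posterior. Low values suggest an anomaly."""
    mu, logvar = encode(x)
    lls = []
    for _ in range(n_samples):
        # Sample z from the approximate posterior q(z | x).
        z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)
        x_mu, x_logvar = decode(z)
        # Diagonal-Gaussian log-likelihood of x under the decoder.
        ll = -0.5 * np.sum(
            x_logvar + np.log(2 * np.pi)
            + (x - x_mu) ** 2 / np.exp(x_logvar)
        )
        lls.append(ll)
    return np.mean(lls)

# Hypothetical stubs standing in for a trained encoder/decoder:
# the "model" reconstructs everything toward the origin.
encode = lambda x: (0.1 * x[:2], np.zeros(2))
decode = lambda z: (np.zeros(4), np.zeros(4))

normal_score = reconstruction_probability(np.zeros(4), encode, decode)
anomaly_score = reconstruction_probability(np.full(4, 5.0), encode, decode)
# The distant (anomalous) point receives the lower score.
```

Unlike a plain autoencoder's reconstruction error, this score accounts for the decoder's predicted variance, which is the probabilistic generalization the quoted passage refers to.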