2019
DOI: 10.1007/978-3-030-30241-2_48
LSTM-Based Anomaly Detection: Detection Rules from Extreme Value Theory

Abstract: In this paper, we explore various statistical techniques for anomaly detection in conjunction with the popular Long Short-Term Memory (LSTM) deep learning model for transportation networks. We obtain the prediction errors from an LSTM model, and then apply three statistical models based on (i) the Gaussian distribution, (ii) Extreme Value Theory (EVT), and (iii) Tukey's method. Using statistical tests and numerical studies, we find strong evidence against the widely employed Gaussian distribution based det…
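The three detection rules named in the abstract can all be viewed as thresholds on the LSTM's prediction errors. Below is a minimal one-sided (upper-tail) sketch on a 1-D error array; the `init_quantile` and `risk` parameters, and the use of SciPy's `genpareto` for the peaks-over-threshold fit, are illustrative assumptions, not details taken from the paper:

```python
import numpy as np
from scipy.stats import genpareto

def gaussian_rule(errors, alpha=3.0):
    """Flag errors more than `alpha` standard deviations above the mean."""
    mu, sigma = errors.mean(), errors.std()
    return errors > mu + alpha * sigma

def tukey_rule(errors, k=1.5):
    """Flag errors above the upper Tukey fence Q3 + k * IQR."""
    q1, q3 = np.percentile(errors, [25, 75])
    return errors > q3 + k * (q3 - q1)

def evt_rule(errors, init_quantile=0.95, risk=1e-3):
    """Peaks-over-threshold EVT rule: fit a Generalized Pareto Distribution
    to exceedances over a high initial threshold u, then flag errors above
    the extreme quantile implied by the fitted tail at probability `risk`."""
    u = np.quantile(errors, init_quantile)
    excesses = errors[errors > u] - u
    shape, _, scale = genpareto.fit(excesses, floc=0.0)
    n, n_u = len(errors), len(excesses)
    if abs(shape) < 1e-9:  # exponential tail limit as shape -> 0
        z = u + scale * np.log(n_u / (n * risk))
    else:
        z = u + (scale / shape) * ((n * risk / n_u) ** (-shape) - 1.0)
    return errors > z
```

All three return a boolean mask over the errors; they differ only in how the cut-off is estimated, which is exactly the comparison the paper makes.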

Cited by 4 publications (4 citation statements)
References 18 publications (21 reference statements)
“…Further, in our recent work [24], after comparing different detection strategies for hybrid deep anomaly detection, we noticed the potential of a strategy based on extreme values. We found that an EVT-based detection rule performed better than other popular detection techniques.…”
Section: End-to-end Deep Anomaly Detection
confidence: 86%
“…We conclude our work in Section VII. 1 A part of this work has been presented as a conference paper [24].…”
Section: B. Our Contributions
confidence: 99%
“…The size of the sliding window is n, n < M, where M is the size of the data. The m×k inputs are passed to the LSTM network (Vega García & Aznarte, 2020; Davis et al., 2019) to predict the next value. Initially, the window contains the inputs {x1, x2, …, xm}, which are fed to the LSTM network to predict xm+1; at the second step, {x2, x3, …, xm+1} are fed to the network to obtain xm+2 as the predicted value.…”
Section: Multi-variate Time Series Prediction
confidence: 99%
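The rolling-window scheme this excerpt describes can be sketched without the deep-learning machinery. In the sketch below, a plain callable stands in for the LSTM predictor (an assumption for illustration only); the window construction and one-step-ahead error collection follow the {x1, …, xm} → xm+1 recipe quoted above:

```python
import numpy as np

def sliding_windows(series, m):
    """Pair each window {x_t, ..., x_{t+m-1}} with its target x_{t+m}."""
    X = np.lib.stride_tricks.sliding_window_view(series[:-1], m)
    y = series[m:]
    return X, y

def rolling_predict(series, m, predictor):
    """One-step-ahead prediction: slide the window forward by one each
    step, feed it to `predictor` (the LSTM in the cited works), and
    collect the absolute prediction errors used for anomaly detection."""
    X, y = sliding_windows(series, m)
    preds = np.array([predictor(w) for w in X])
    return preds, np.abs(y - preds)
```

For example, on the series x1, x2, … with m = 3, the first call sees {x1, x2, x3} and is scored against x4, the second sees {x2, x3, x4} against x5, and so on.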
“…The various existing works are compared against the proposed model to evaluate its performance. Machine learning models for multi-step time series prediction include the LSTM model [21] and the LSTM autoencoder model (Davis et al., 2019). Further, the proposed model is compared with the existing models in terms of computational power, resource consumption, delay, CPU usage, and energy consumption.…”
Section: Accuracy = (TP + TN) / (TP + TN + FP + FN)
confidence: 99%
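The section title above refers to the standard accuracy formula over confusion-matrix counts. As a one-line reminder (not code from the cited work):

```python
def accuracy(tp, tn, fp, fn):
    """Accuracy = (TP + TN) / (TP + TN + FP + FN)."""
    return (tp + tn) / (tp + tn + fp + fn)
```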