2021
DOI: 10.3390/app11199290

Anomaly Detection of the Brake Operating Unit on Metro Vehicles Using a One-Class LSTM Autoencoder

Abstract: Detecting anomalies in the Brake Operating Unit (BOU) braking system of metro trains is very important for trains’ reliability and safety. However, current periodic maintenance and inspection cannot detect anomalies at an early stage. In addition, constructing a stable and accurate anomaly detection system is a very challenging task. Hence, in this work, we propose a method for detecting anomalies of BOU on metro vehicles using a one-class long short-term memory (LSTM) autoencoder. First, we extracted brake cy…

Cited by 26 publications (13 citation statements)
References 35 publications (36 reference statements)
“…In this paper, we use sample data of web application transgression scenarios and train the LSTM-AutoEncoder model to identify the pages of transgression scenarios, which addresses the false-positive problem of traditional transgression detection and improves detection accuracy. The experimental data show that the LSTM-AutoEncoder model has an accuracy advantage over the traditional one-class SVM model and the AutoEncoder model at a certain data scale, and also has a clear advantage in processing web text sequences with contextual relationships, as follows [31]: in the one-class model, the model reaches a very high precision of 0.974 and has a high recognition rate for unauthorized scene pages in the dataset. However, the recall rate of the model is only 0.473, which means that many unauthorized scene pages are misclassified by the model as non-unauthorized pages.…”
Section: Discussion
confidence: 99%
“…The LSTM-AutoEncoder model has an accuracy advantage over the traditional one-class SVM model and the AutoEncoder model at a certain data scale, and also has a clear advantage in processing web text sequences with contextual relationships, as follows [31]: (1) The application of AI to network security is at a relatively early stage of development, and this paper provides a good application case. However, the experimental data in this paper also have certain shortcomings.…”
Section: Data Availability
confidence: 98%
“…An unlabeled dataset can be framed as a supervised learning problem in which the network learns to produce an output that reproduces the original input. This network can be trained by minimizing the reconstruction error L(x, x̂), which gauges the discrepancy between the initial information (the input sequence) and the resulting reconstructed sequence [51,52,53].…”
Section: Theoretical Backgrounds
confidence: 99%
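The reconstruction-error idea quoted above can be illustrated with a minimal sketch. The function name and the toy sequences are illustrative, not taken from the paper; mean squared error is one common choice of reconstruction error:

```python
import numpy as np

def reconstruction_error(x, x_hat):
    """Mean squared error between an input sequence x and its reconstruction x_hat."""
    x = np.asarray(x, dtype=float)
    x_hat = np.asarray(x_hat, dtype=float)
    return float(np.mean((x - x_hat) ** 2))

# A sequence the autoencoder reconstructs well (normal-like) yields a small error;
# a poorly reconstructed sequence yields a large error.
normal_err = reconstruction_error([1.0, 2.0, 3.0], [1.1, 1.9, 3.0])
anomalous_err = reconstruction_error([1.0, 2.0, 3.0], [4.0, 0.0, 7.0])
```

In an LSTM autoencoder, `x_hat` would be the decoder's output for the input sequence `x`; here it is supplied directly to keep the sketch self-contained.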
“…Anomaly detection using an autoencoder learns the important characteristics of normal samples, under the assumption that most of the data used for training are normal. In addition, a loss-function-based threshold is calculated [52,53,54,55], and if a sample's loss function value exceeds the threshold, the corresponding data are determined to be abnormal, where x is the input data and x̂ is the reconstructed input data.…”
Section: Introduction
confidence: 99%