2019 · Preprint
DOI: 10.48550/arxiv.1911.08744
Log Message Anomaly Detection and Classification Using Auto-B/LSTM and Auto-GRU

Abstract: Log messages are now widely used in software systems. Classifying them is important, as millions of logs are generated each day, yet most logs are unstructured, which makes classification a challenge. In this paper, Deep Learning (DL) methods called Auto-LSTM, Auto-BLSTM and Auto-GRU are developed for anomaly detection and log classification. These models convert unstructured log data into trained features suitable for classification algorithms. They are evaluated using four data sets, nam…
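As a rough illustration of the pipeline the abstract describes, the sketch below pairs a small autoencoder (compressing vectorised log messages into dense features) with an LSTM classifier over windows of those features. The layer sizes, vector lengths, window size, and the use of Keras are assumptions for illustration, not the paper's exact architecture.

```python
# Hedged sketch of the Auto-LSTM idea: an autoencoder learns compact
# features from log-message vectors; an LSTM then classifies windows
# of those features as normal or anomalous. All sizes are assumed.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

VEC_LEN = 64    # assumed length of each vectorised log message
FEAT_LEN = 16   # assumed size of the learned feature vector
SEQ_LEN = 10    # assumed number of messages per log window

# 1) Autoencoder: unstructured log vectors -> trained features.
inputs = keras.Input(shape=(VEC_LEN,))
encoded = layers.Dense(FEAT_LEN, activation="relu")(inputs)
decoded = layers.Dense(VEC_LEN, activation="sigmoid")(encoded)
autoencoder = keras.Model(inputs, decoded)
encoder = keras.Model(inputs, encoded)
autoencoder.compile(optimizer="adam", loss="mse")

# 2) LSTM classifier over windows of autoencoder features.
clf = keras.Sequential([
    layers.Input(shape=(SEQ_LEN, FEAT_LEN)),
    layers.LSTM(32),
    layers.Dense(1, activation="sigmoid"),  # anomaly score per window
])
clf.compile(optimizer="adam", loss="binary_crossentropy",
            metrics=["accuracy"])

# Toy data stands in for vectorised log messages.
x = np.random.rand(320, VEC_LEN).astype("float32")
autoencoder.fit(x, x, epochs=1, batch_size=32, verbose=0)
features = encoder.predict(x, verbose=0).reshape(-1, SEQ_LEN, FEAT_LEN)
labels = np.random.randint(0, 2, size=(features.shape[0], 1))
clf.fit(features, labels, epochs=1, verbose=0)
```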

Cited by 1 publication (4 citation statements) · References 28 publications (31 reference statements)
“…Some approaches make use of Bi-LSTM RNNs, which are basically two independent LSTM RNNs that work in parallel and process sequences in opposite directions, i.e., while one LSTM RNN processes the input sequences as usual from the first to the last element, the other LSTM RNN processes sequence elements starting from the last entry and predicts elements that chronologically precede them. Experiments suggest that Bi-LSTM RNNs outperform LSTM RNNs [17], [34], [46], [50], [55], [57], [72], [75]. Another popular choice for RNNs are Gated Recurrent Units (GRU) that simplify the cell architecture as they only rely on update and reset gates.…”
Section: B. Deep Learning Techniques
confidence: 99%
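A minimal sketch of the Bi-LSTM pattern this statement describes, using Keras' Bidirectional wrapper (an assumed framework choice): one LSTM reads the sequence forward, a mirrored copy reads it backward, and their final states are concatenated.

```python
# Hedged Bi-LSTM sketch: two LSTMs process the same sequence in
# opposite directions; Keras concatenates their outputs. Shapes are
# illustrative assumptions, not taken from the paper.
from tensorflow import keras
from tensorflow.keras import layers

SEQ_LEN, FEAT_LEN = 10, 16  # assumed window length and feature size

bi_lstm = keras.Sequential([
    layers.Input(shape=(SEQ_LEN, FEAT_LEN)),
    # Forward LSTM reads t = 1..T, backward LSTM reads t = T..1;
    # their final states are concatenated (32 + 32 = 64 units here).
    layers.Bidirectional(layers.LSTM(32)),
    layers.Dense(1, activation="sigmoid"),  # sequence-level score
])
bi_lstm.summary()
```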
“…Another popular choice for RNNs are Gated Recurrent Units (GRU) that simplify the cell architecture as they only rely on update and reset gates. One of the main benefits of GRUs is that they are computationally more efficient than LSTM RNNs, which is a relevant aspect for use cases focusing on edge devices [21], [34], [35], [37], [53], [56], [62], [68], [69].…”
Section: B. Deep Learning Techniques
confidence: 99%
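A hedged sketch of the single GRU step this statement refers to, written in plain NumPy: the update gate z decides how much of the previous state survives, and the reset gate r decides how much of it feeds the candidate state. The weights are random placeholders, and the exact gate-mixing convention varies across formulations.

```python
# Hedged GRU-step sketch illustrating the two gates the statement
# mentions (vs. an LSTM's three gates plus cell state, which makes
# GRUs computationally cheaper). Weights are random placeholders.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, W, U, b):
    """One GRU step; W, U, b each hold (z, r, h) parameters."""
    z = sigmoid(W["z"] @ x + U["z"] @ h + b["z"])              # update gate
    r = sigmoid(W["r"] @ x + U["r"] @ h + b["r"])              # reset gate
    h_tilde = np.tanh(W["h"] @ x + U["h"] @ (r * h) + b["h"])  # candidate
    return (1 - z) * h + z * h_tilde                           # new state

rng = np.random.default_rng(0)
n_in, n_hid = 16, 32
W = {k: rng.normal(size=(n_hid, n_in)) for k in "zrh"}
U = {k: rng.normal(size=(n_hid, n_hid)) for k in "zrh"}
b = {k: np.zeros(n_hid) for k in "zrh"}

h = np.zeros(n_hid)
for x in rng.normal(size=(10, n_in)):  # run over a toy sequence
    h = gru_step(x, h, W, U, b)
print(h.shape)  # (32,)
```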