2022
DOI: 10.1109/access.2022.3140342
A Deep Bidirectional LSTM-GRU Network Model for Automated Ciphertext Classification

Abstract: Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU) are a class of Recurrent Neural Networks (RNN) suitable for sequential data processing. Bidirectional LSTM (BLSTM) enables a better understanding of context by learning future time steps in a bidirectional manner. Moreover, GRU deploys reset and update gates in the hidden layer, which makes it computationally more efficient than a conventional LSTM. This paper proposes an efficient network model based on deep BLSTM-GRU for ciphertext classification ai…
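
For orientation, here is a minimal sketch of the kind of stacked BLSTM-GRU classifier the abstract describes, written in Keras. The vocabulary size, sequence length, layer widths, and number of cipher classes are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal BLSTM-GRU text classifier sketch (Keras). Vocabulary size,
# sequence length, layer widths, and class count are assumptions,
# not the paper's reported configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 256    # e.g. byte-level ciphertext tokens (assumption)
MAX_LEN = 200       # padded ciphertext length (assumption)
NUM_CLASSES = 5     # number of cipher algorithms to distinguish (assumption)

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 64),
    # Bidirectional LSTM: reads the token sequence forward and backward.
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
    # GRU: reset/update gates only, so fewer parameters than an LSTM cell.
    layers.GRU(64),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```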

Cited by 33 publications (12 citation statements) · References 35 publications
“…A BiLSTM [42] model contains two LSTM models where one LSTM takes the input information sequence in the forward direction (past to future) and the other LSTM in the backward direction (future to past). The BiLSTM input flow in both directions is used to preserve the future and past sequence information.…”
Section: Methods
confidence: 99%
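
As a small illustration of the two-directional reading described in the statement above, the following Keras sketch builds a bidirectional wrapper whose backward layer explicitly consumes the sequence future-to-past; the input shape and layer width are assumptions.

```python
# BiLSTM sketch: the backward layer reads the sequence future-to-past,
# and its states are concatenated with the forward pass, so every time
# step carries both past and future context. Sizes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

forward_lstm = layers.LSTM(32, return_sequences=True)
backward_lstm = layers.LSTM(32, return_sequences=True, go_backwards=True)
bilstm = layers.Bidirectional(forward_lstm, backward_layer=backward_lstm)

x = tf.random.normal((4, 10, 8))   # (batch, time steps, features)
y = bilstm(x)
print(y.shape)                      # (4, 10, 64): forward + backward states
```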
“…In this study, two different deep learning models based on GRU and LSTM, which perform productivity estimation on wheat data from Konya province, are proposed to meet the stated requirement. Since the two proposed deep learning models are based on RNNs [41–44], both the performance and the training times of the models were compared. As a result of the comparison, the LSTM model performed slightly better than the GRU model.…”
Section: Conclusion and Discussion
confidence: 99%
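
A rough sketch of how such an LSTM-versus-GRU comparison of parameter count and training time could be set up in Keras follows; the synthetic data, window length, and layer width are assumptions, not the cited study's setup.

```python
# Compare LSTM and GRU recurrent layers of the same width on dummy data.
# Shapes, sizes, and epochs are illustrative assumptions.
import time
import tensorflow as tf
from tensorflow.keras import layers, models

def build(cell):
    return models.Sequential([
        layers.Input(shape=(30, 8)),   # 30 time steps, 8 features (assumption)
        cell(64),
        layers.Dense(1),               # single regression output (e.g. yield)
    ])

x = tf.random.normal((512, 30, 8))
y = tf.random.normal((512, 1))

for name, cell in [("LSTM", layers.LSTM), ("GRU", layers.GRU)]:
    model = build(cell)
    model.compile(optimizer="adam", loss="mse")
    start = time.time()
    model.fit(x, y, epochs=2, batch_size=64, verbose=0)
    print(f"{name}: {model.count_params()} params, "
          f"{time.time() - start:.1f}s for 2 epochs")
```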
“…Ensuring the dataset is preprocessed correctly can lead to more accurate and efficient training, ultimately resulting in a more robust and reliable model [48]. By normalizing the features, handling categorical variables, selecting relevant features, and generating input sequences, we ensure that the LSTM model can effectively capture the underlying patterns and relationships in the data [49]. This, in turn, allows the model to generalize well to unseen data, providing high intrusion detection accuracy and minimizing false alarms.…”
Section: Feature Extraction and Preprocessing
confidence: 99%
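
A minimal sketch of the preprocessing steps the passage names (normalization, categorical encoding, and sequence windowing for an LSTM) is shown below; the column names, window length, and synthetic data frame are illustrative assumptions, not the cited paper's dataset.

```python
# Preprocessing sketch: one-hot encode categoricals, min-max normalize
# numeric features, then slice rows into fixed-length LSTM input windows.
# Column names, window length, and the dummy data are assumptions.
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

def make_sequences(features: np.ndarray, labels: np.ndarray, window: int = 20):
    """Turn a flat (samples, features) matrix into (windows, window, features)."""
    xs, ys = [], []
    for i in range(len(features) - window):
        xs.append(features[i:i + window])
        ys.append(labels[i + window])        # label of the step after the window
    return np.asarray(xs), np.asarray(ys)

# Hypothetical traffic records with one categorical column.
df = pd.DataFrame({
    "duration": np.random.rand(100),
    "bytes": np.random.randint(0, 1_000, 100),
    "protocol": np.random.choice(["tcp", "udp", "icmp"], 100),
    "label": np.random.randint(0, 2, 100),
})

df = pd.get_dummies(df, columns=["protocol"])                 # categorical -> one-hot
labels = df.pop("label").to_numpy()
scaled = MinMaxScaler().fit_transform(df.to_numpy(dtype="float64"))  # normalize

X, y = make_sequences(scaled, labels, window=20)
print(X.shape, y.shape)   # (80, 20, n_features), (80,)
```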