2022
DOI: 10.3390/app12199994

Deep Learning Model with Sequential Features for Malware Classification

Abstract: Malware is currently growing explosively, and with it the demand for malware classification; the core problem is the low accuracy of both malware detection and classification. Working from the static features of malicious families, this study proposes TCN-BiGRU, a new deep learning method that combines a temporal convolutional network (TCN) with a bidirectional gated recurrent unit (BiGRU). First, we extracted the features of malware assembly code sequences and byte code sequences. Second, we shortened the …
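The TCN half of the proposed model is built on dilated causal convolutions, whose receptive field grows exponentially as layers are stacked, letting the network cover long opcode sequences. A minimal NumPy sketch of one such convolution (the function name and toy weights are illustrative, not from the paper):

```python
import numpy as np

def dilated_causal_conv(x, w, dilation=1):
    """Causal 1-D convolution with dilation: the output at step t depends
    only on inputs at t, t-d, t-2d, ... (the sequence is left-padded with
    zeros so no future positions are seen)."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array(
        [sum(w[j] * xp[pad + t - j * dilation] for j in range(k))
         for t in range(len(x))]
    )

# With kernel [1, 1] and dilation 2, each output is x[t] + x[t-2]
# (zero before t = 2); stacking dilations 1, 2, 4, ... is the standard
# TCN construction.
x = np.arange(8, dtype=float)
y = dilated_causal_conv(x, np.array([1.0, 1.0]), dilation=2)
```

A real TCN block would add weight normalisation, residual connections, and multiple channels, but the causal dilation above is the mechanism the abstract refers to.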

Cited by 4 publications (2 citation statements); References 25 publications.
“…For the Malware identification phase, STCN is compared with two traditional statistical learning methods, that is, SVM and LR and some other advanced CE‐optimised neural networks, that is, long short‐term memory (LSTM), gated recurrent unit (GRU), TCN, and TCN with bidirectional GRU (TCN‐BiGRU). Hence, the specific benchmarks in this study include unigram‐based SVM, unigram‐based LR, Word2Vec‐based GRU, Word2Vec‐based LSTM [24], Word2Vec‐based TCN [18], and Word2Vec‐based TCN‐BiGRU [25]. In this study, the experiments are conducted on an NVIDIA GPU and implemented by Python with several extended libraries.…”
Section: Results
confidence: 99%
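The unigram-based SVM and LR baselines named in this statement presumably featurise each sample as a vector of token counts over an opcode vocabulary. A stdlib-only sketch of that featurisation step (vocabulary and sample tokens are illustrative):

```python
from collections import Counter

def unigram_features(opcodes, vocab):
    """Map an opcode sequence to a fixed-length unigram count vector,
    one entry per vocabulary token."""
    counts = Counter(opcodes)
    return [counts.get(tok, 0) for tok in vocab]

vocab = ["mov", "push", "call", "xor"]          # illustrative vocabulary
sample = ["mov", "push", "mov", "call", "mov"]  # illustrative disassembly
vec = unigram_features(sample, vocab)           # counts of each token
```

The resulting vectors would then be fed to an off-the-shelf SVM or logistic-regression classifier; the Word2Vec-based baselines replace these counts with learned dense embeddings.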
“…In order to evaluate the performance of the classification model, the model was compared with the model methods proposed in [8, 37–46]. The data sets used in these references are the same as those in this paper.…”
Section: Comparison Of Different Classification Algorithms
confidence: 99%