2020
DOI: 10.1109/access.2020.3010706

Detecting Malware Code as Video With Compressed, Time-Distributed Neural Networks

Abstract: Malware is an ever-present problem in the modern era and while detecting malware with AI has grown as a new field of exploration, current methods are not yet mature enough for widespread adoption in terms of speed and performance. Current methods largely focus on viewing malicious assembly as an image for detection, requiring a large amount of preprocessing and making network architectures inflexible. Preprocessing malware images to one size introduces additional time to predict and makes the task of predictio…
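The abstract describes treating executable bytecode as a sequence of frames (a "video") rather than as a single resized image. A minimal sketch of one plausible framing step is below; the frame dimensions and normalization are illustrative assumptions, not the authors' exact settings.

# Hedged sketch: pack raw executable bytes into fixed-size grayscale frames.
import numpy as np

def bytes_to_frames(data: bytes, frame_h: int = 64, frame_w: int = 64) -> np.ndarray:
    """Return frames shaped (num_frames, frame_h, frame_w, 1), values in [0, 1]."""
    frame_size = frame_h * frame_w
    arr = np.frombuffer(data, dtype=np.uint8)
    # Zero-pad so the byte stream splits evenly into whole frames.
    pad = (-len(arr)) % frame_size
    arr = np.pad(arr, (0, pad))
    return arr.reshape(-1, frame_h, frame_w, 1).astype(np.float32) / 255.0

Each binary then becomes a variable-length sequence of frames, which is what lets the downstream network avoid resizing every sample to one fixed image size.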

Cited by 6 publications (3 citation statements). References 16 publications.
“…In Reference 20, a new methodology converts executable bytecode into a video, rather than an image, and classifies it with deep, time-distributed neural networks that reach up to 98.74% testing accuracy on nine classes of malware and up to 99.36% testing accuracy on a balanced set of malicious versus benign files. The network can also classify all malware in the considered dataset with a 13% false-positive rate, and succeeds both in classifying only sections of an input and in an early 0-day scenario.…”
Section: Related Studies (mentioning)
confidence: 99%
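The excerpt describes classifying the resulting "video" with deep, time-distributed networks. The sketch below shows one way such a model could be assembled in Keras, assuming a shared per-frame CNN feeding an LSTM; only the nine-class output is taken from the excerpt, and all layer sizes are illustrative.

# Hedged sketch: time-distributed per-frame CNN + LSTM classifier.
from tensorflow.keras import layers, models

def build_video_malware_classifier(frame_h=64, frame_w=64, n_classes=9):
    # Per-frame CNN shared across all time steps.
    frame_in = layers.Input(shape=(frame_h, frame_w, 1))
    x = layers.Conv2D(16, 3, activation="relu")(frame_in)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    frame_cnn = models.Model(frame_in, x)

    # None in the time axis lets the model accept binaries of any length.
    video_in = layers.Input(shape=(None, frame_h, frame_w, 1))
    t = layers.TimeDistributed(frame_cnn)(video_in)
    t = layers.LSTM(64)(t)
    out = layers.Dense(n_classes, activation="softmax")(t)

    model = models.Model(video_in, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model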
“…Tackling Non-Unique Redundancy: Regularizing or pruning via similarity is a well-explored topic (Ayinde et al., 2019; Zhu et al., 2018; Srinivas & Babu, 2015; Santacroce et al., 2020). However, our approach integrates more cleanly with movement to insulate weights from regularization, with only a small increase in training time, as listed in Appendix A.…”
Section: Globally Unique Movement (mentioning)
confidence: 99%
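The excerpt refers to regularizing or pruning weights via similarity. As a generic illustration (not the cited papers' exact loss), a redundancy penalty can be computed from pairwise cosine similarity between neurons:

# Hedged sketch: mean absolute cosine similarity between distinct rows of W,
# usable as an added regularization term that discourages duplicate neurons.
import numpy as np

def redundancy_penalty(W: np.ndarray) -> float:
    norms = np.linalg.norm(W, axis=1, keepdims=True) + 1e-8
    Wn = W / norms
    sim = Wn @ Wn.T                              # pairwise cosine similarities
    off_diag = sim[~np.eye(len(W), dtype=bool)]  # drop self-similarities
    return float(np.abs(off_diag).mean())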
“…Consequently, after dividing the input into windows, each window is subjected to the same embedding, special dropout, bidirectional LSTM, and dense operations (Santacroce et al. 2020). The rationale behind using a TimeDistributed output layer is to perform classification at each time step, enabling us to predict the error types of many words in an input sentence.…”
Section: Dense Layer (mentioning)
confidence: 99%
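The excerpt describes applying the same embedding, dropout, bidirectional LSTM, and dense operations to each window, with a TimeDistributed output layer producing a prediction at every time step. A hedged Keras sketch of that stack follows; vocabulary size, dimensions, and tag count are illustrative assumptions.

# Hedged sketch: per-token classifier over a fixed-length window.
from tensorflow.keras import layers, models

def build_per_token_classifier(vocab_size=20000, window_len=128,
                               embed_dim=128, n_tags=5):
    inputs = layers.Input(shape=(window_len,))
    x = layers.Embedding(vocab_size, embed_dim)(inputs)
    x = layers.SpatialDropout1D(0.3)(x)          # dropout over embedding channels
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
    # TimeDistributed applies the same dense classifier at every time step,
    # so each word in the window receives its own prediction.
    outputs = layers.TimeDistributed(layers.Dense(n_tags, activation="softmax"))(x)
    return models.Model(inputs, outputs)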