2019
DOI: 10.1109/access.2019.2908225
Deep-Full-Range: A Deep Learning Based Network Encrypted Traffic Classification and Intrusion Detection Framework

Abstract: With the rapid evolution of network traffic diversity, understanding network traffic has become more pivotal and more formidable. Previously, traffic classification and intrusion detection required burdensome analysis of various traffic features and attack-related characteristics by experts, and even private information might be required. However, due to outdated feature labeling and privacy protocols, the existing approaches may not fit the characteristics of the changing network environme…

Cited by 185 publications (83 citation statements). References 13 publications.
“…Combining various payload analysis techniques can yield comprehensive content information, which can improve the effectiveness of the IDS. Zeng et al. [44] proposed a payload detection method built on multiple deep learning models. They adopted three deep learning models (a CNN, an LSTM, and a stacked autoencoder) to extract features from different points of view, as sketched below.…”
Section: Payload Analysis-Based Detection
confidence: 99%
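As a rough illustration of this multi-model payload analysis idea, the minimal PyTorch sketch below runs a 1D CNN, an LSTM, and the encoder half of a stacked autoencoder over the same fixed-length payload byte window and fuses their feature vectors. The payload length, layer sizes, and concatenation-based fusion are illustrative assumptions, not the architecture reported by Zeng et al. [44].

```python
# Sketch: three deep models (1D CNN, LSTM, stacked-autoencoder encoder) each
# extract features from the same raw payload bytes; their outputs are fused.
# All dimensions below are assumptions for illustration.
import torch
import torch.nn as nn

PAYLOAD_LEN = 784  # assumed fixed-length byte window per flow


class CNNExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),   # -> (batch, 64)
        )

    def forward(self, x):                 # x: (batch, PAYLOAD_LEN), values in [0, 1]
        return self.net(x.unsqueeze(1))


class LSTMExtractor(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)

    def forward(self, x):                 # treat each byte as one time step
        _, (h, _) = self.lstm(x.unsqueeze(-1))
        return h[-1]                      # -> (batch, hidden)


class SAEExtractor(nn.Module):
    """Encoder half of a stacked autoencoder used as a feature extractor."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(PAYLOAD_LEN, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
        )

    def forward(self, x):
        return self.encoder(x)            # -> (batch, 64)


# One possible fusion: concatenate the per-model feature vectors and feed a
# shared classifier (classifier omitted here for brevity).
payload = torch.rand(8, PAYLOAD_LEN)      # dummy batch of normalized payload bytes
extractors = (CNNExtractor(), LSTMExtractor(), SAEExtractor())
features = [m(payload) for m in extractors]
fused = torch.cat(features, dim=1)        # shape: (8, 192)
```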
“…Stacked autoencoders are also considered in combination with traditional classifiers (e.g., SVM, K-NN, Gaussian Naive-Bayes) [29]. In addition, Zeng et al [30] adopt stacked autoencoders, in which the compressed output of an autoencoder is used as the input of the autoencoder in the next layer.…”
Section: Related Work
confidence: 99%
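The layer-wise construction described here, where each autoencoder's compressed output becomes the input of the next autoencoder, can be sketched as greedy pretraining. The dimensions, optimizer settings, and training loop below are assumptions for illustration, not the configuration used in [30].

```python
# Hedged sketch of greedy layer-wise stacking of autoencoders.
import torch
import torch.nn as nn

def train_autoencoder(data, in_dim, code_dim, epochs=5, lr=1e-3):
    """Train one autoencoder to reconstruct `data`; return its encoder."""
    encoder = nn.Sequential(nn.Linear(in_dim, code_dim), nn.ReLU())
    decoder = nn.Linear(code_dim, in_dim)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        recon = decoder(encoder(data))
        loss = loss_fn(recon, data)       # reconstruction objective
        loss.backward()
        opt.step()
    return encoder

# Greedy stacking: 784 -> 256 -> 64 (all dimensions are assumptions).
x = torch.rand(128, 784)                  # dummy flow/payload feature vectors
enc1 = train_autoencoder(x, 784, 256)
with torch.no_grad():
    code1 = enc1(x)                       # compressed output of the first AE...
enc2 = train_autoencoder(code1, 256, 64)  # ...becomes the input of the next AE
stacked_encoder = nn.Sequential(enc1, enc2)
```

In practice the resulting stacked encoder is usually fine-tuned end to end together with a classifier head after this unsupervised pretraining step.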
“…Although ensemble learning [32,33] showed good performance, models based on weak learners lacked interpretability. Neural networks [34-43] needed a large amount of data to train a model, which was difficult to realize when only a small training set was available. Transfer learning [44] and active learning [45] addressed the issues of model practicability and insufficient labeled data during training, respectively, but they have not been adequately explored in studies on fine-grained classification.…”
Section: Related Work
confidence: 99%