2021 51st Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W) 2021
DOI: 10.1109/dsn-w52860.2021.00031
Network Intrusion Detection Based on Active Semi-supervised Learning

Cited by 10 publications (22 citation statements) · References 23 publications
“…To showcase all scenarios envisioned in our CEF-SsL framework, we consider 9 SsL methods which are variations of two established SsL methods: self learning via pseudo-labelling (e.g., [65]) and active learning via uncertainty sampling (e.g., [66]), summarized in §2.3. Specifically, we consider 3 'pure' pseudo-labelling methods, 3 'pure' active learning methods, and 3 combinations thereof (e.g., [70]), where we cascade pseudo-labelling with active learning. The decision criterion is the confidence threshold c. Pseudo Labelling.…”
Section: Selected SsL Methods
confidence: 99%
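The quoted passage describes cascading self-learning via pseudo-labelling with a confidence threshold c: the model labels unlabelled samples itself, keeping only predictions above c. A minimal sketch of that step, assuming scikit-learn conventions (the function name `pseudo_label` and the synthetic data are hypothetical, not from the cited works):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pseudo_label(model, X_labeled, y_labeled, X_unlabeled, c=0.9):
    """Self-learning via pseudo-labelling: train on the labelled set,
    then adopt the model's own predictions as labels for every
    unlabelled sample whose top-class probability is at least c."""
    model.fit(X_labeled, y_labeled)
    proba = model.predict_proba(X_unlabeled)
    confident = proba.max(axis=1) >= c          # decision criterion: threshold c
    pseudo_y = model.classes_[proba[confident].argmax(axis=1)]
    X_aug = np.vstack([X_labeled, X_unlabeled[confident]])
    y_aug = np.concatenate([y_labeled, pseudo_y])
    return X_aug, y_aug
```

In the cascaded variants mentioned in the quote, samples that fall below c would then be handed to the active-learning stage instead of being discarded.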
“…The analysis is focused on 'suggesting' to the oracle which samples in U should be correctly labelled to improve the performance, and the suggestion is based on the confidence of the model on the samples in U. Intuitively, the model can learn 'more' from samples with a low confidence [70]. The oracle then assigns the correct ground truth to such samples, which are inserted into L and used to retrain the model-using a correctly labelled dataset.…”
Section: Focus Of The Paper
confidence: 99%
“…Among these, we mention semisupervised ML approaches (e.g. [28,45]), which combine unlabelled with labelled data, and are hence orthogonal to our work.…”
Section: Related Work
confidence: 99%
“…Specifically, the adopted splits s(N ) and s(M ) are always 80:20 for both T and E. We use such splits because they are common in related literature (e.g., [17,45]), therefore enabling a more fair comparison of our results with those of past works. We considered different ML algorithms, but we found that Random Forests consistently provided the best tradeoff in terms of detection performance, rate of false alarms, and training time-a result that confirms the state-of-the-art on the same datasets (e.g., [17,19,21,45]). Hence our results will refer to Random Forest as the learning algorithm for each classifier.…”
Section: Assessment
confidence: 99%
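The quoted protocol is a conventional one: an 80:20 train/test split and a Random Forest classifier. A minimal, self-contained sketch of that setup with scikit-learn, using synthetic stand-in data (the cited works evaluate on intrusion-detection datasets, which are not reproduced here):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data as a stand-in for a labelled
# network-traffic dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# 80:20 split, as in the quoted evaluation protocol.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```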