2022 52nd Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)
DOI: 10.1109/dsn-w54100.2022.00034
A Two-Layer Soft-Voting Ensemble Learning Model For Network Intrusion Detection

Cited by 6 publications (3 citation statements) · References 19 publications
“…Ensemble Learning [27][28][29][30][31][32][33] solves problems by training multiple learners and combining their outputs. An ensemble generalizes better than its individual weak learners, and can turn weak learners that are only slightly better than random guessing into strong learners with accurate predictions.…”
Section: Related Work
confidence: 99%
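The statement above describes soft-voting ensembles in general terms. A minimal sketch of the idea, using scikit-learn's `VotingClassifier` with stand-in base learners and a synthetic dataset (none of which are taken from the cited papers), might look like:

```python
# Hedged sketch: soft voting averages the predicted class probabilities
# of several base learners; the estimators and data here are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
    ],
    voting="soft",  # average predict_proba outputs, then take the argmax
)
ensemble.fit(X_tr, y_tr)
print(round(ensemble.score(X_te, y_te), 2))
```

Soft voting requires each base learner to expose `predict_proba`; averaging probabilities rather than hard labels lets confident base learners outweigh uncertain ones.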
“…Saba [32] proposed a two-stage hybrid method that selects appropriate features with a genetic algorithm and employs an ensemble classifier, applying SVM and decision trees to label traffic as malicious or normal. Yao [33] proposed a two-layer soft-voting ensemble learning model with RF, LightGBM, and XGBoost as base classifiers, and used an adversarial validation algorithm to test whether the data distributions of the training and testing datasets are consistent, determining whether the dataset needs re-splitting. The results showed that the model achieves higher accuracy in both binary and multi-class classification than other one-class classification models.…”
Section: Related Work
confidence: 99%
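The adversarial validation step mentioned above can be sketched as follows: train a classifier to distinguish training rows from test rows; if its ROC-AUC is near 0.5, the two splits are indistinguishable and hence consistent. The data, model choice, and threshold below are illustrative assumptions, not Yao's actual configuration:

```python
# Hedged sketch of adversarial validation on a synthetic split.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(300, 5))  # "training" features
test = rng.normal(0.0, 1.0, size=(300, 5))   # drawn from the same distribution

X = np.vstack([train, test])
y = np.concatenate([np.zeros(300), np.ones(300)])  # 0 = train row, 1 = test row

auc = cross_val_score(
    RandomForestClassifier(n_estimators=50, random_state=0),
    X, y, cv=5, scoring="roc_auc",
).mean()
# AUC near 0.5: the classifier cannot tell train from test, so the split
# is distributionally consistent. AUC near 1.0 would suggest re-splitting.
print(auc < 0.7)
```

Here both splits come from the same Gaussian, so the adversarial classifier should perform no better than chance.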
“…The work by Yao et al [165] addresses the challenges of data imbalance and low accuracy. To tackle these, the authors employ an ensemble of the classifiers XGBoost, LightGBM, and Random Forest on the UNSW-NB15 dataset.…”
Section: H Work Published In 2022
confidence: 99%
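The data-imbalance challenge noted above (attack traffic is typically far rarer than benign traffic) is often countered with class weighting. A minimal sketch using a synthetic 9:1 imbalanced dataset and scikit-learn's `class_weight="balanced"` option — an illustrative technique, not necessarily the one used in the cited work:

```python
# Hedged sketch: class weighting to improve minority-class (attack) recall
# on an imbalanced synthetic dataset standing in for network traffic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# 9:1 majority/minority split, mimicking the rarity of attack flows
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# "balanced" reweights each class inversely to its frequency
clf = RandomForestClassifier(class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)
print(round(recall_score(y_te, clf.predict(X_te)), 2))
```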