2018 IEEE International Conference on Communications (ICC) 2018
DOI: 10.1109/icc.2018.8422328
Chronic Poisoning against Machine Learning Based IDSs Using Edge Pattern Detection


Cited by 13 publications (16 citation statements)
References 11 publications
“…The dataset can also be periodically updated to better reflect changes in the network environment, or to include malicious samples of novel attack variants. The updated version is then used to re-train the model accordingly [54].…”
Section: Analysis and Classification
mentioning, confidence: 99%
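The periodic re-training loop described in the statement above can be sketched as follows. This is a minimal illustration, not the IDS from the cited paper: the class `PeriodicDetector`, its centroid-based model, and the method names are all hypothetical stand-ins for "fold new samples into the dataset, then re-train".

```python
# Minimal sketch of periodic re-training, assuming a toy centroid-based
# detector stands in for the ML-based IDS. All names here are illustrative.

class PeriodicDetector:
    def __init__(self):
        self.dataset = []    # list of (feature_vector, label) pairs
        self.centroids = {}  # label -> mean feature vector

    def retrain(self):
        # Recompute per-class centroids from the full, updated dataset.
        sums, counts = {}, {}
        for x, y in self.dataset:
            acc = sums.setdefault(y, [0.0] * len(x))
            for i, v in enumerate(x):
                acc[i] += v
            counts[y] = counts.get(y, 0) + 1
        self.centroids = {
            y: [v / counts[y] for v in acc] for y, acc in sums.items()
        }

    def update(self, new_samples):
        # Periodic update: fold in novel (possibly malicious) samples,
        # then re-train the model on the refreshed dataset.
        self.dataset.extend(new_samples)
        self.retrain()

    def predict(self, x):
        # Classify by nearest centroid (squared Euclidean distance).
        return min(
            self.centroids,
            key=lambda y: sum((a - b) ** 2 for a, b in zip(x, self.centroids[y])),
        )

det = PeriodicDetector()
det.update([([0.0, 0.0], "benign"), ([1.0, 1.0], "attack")])
print(det.predict([0.9, 0.8]))  # → attack
```

Each call to `update` re-trains from scratch on the accumulated dataset; a production IDS would more likely use incremental or scheduled batch re-training, but the data flow is the same.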
“…This method is simple and effective [14]; however, it focuses on binary classification problems and does not generalize to other learning algorithms. Li et al. [15] use the Edge Pattern Detection (EPD) algorithm to design a poisoning attack, named the chronic poisoning attack, against machine-learning-based intrusion detection systems (IDSs). The method can poison several learning algorithms, including SVM, LR and NB [15].…”
Section: Poisoning Attacks Targeting Non-NN Models
mentioning, confidence: 99%
“…Li et al. [15] use the Edge Pattern Detection (EPD) algorithm to design a poisoning attack, named the chronic poisoning attack, against machine-learning-based intrusion detection systems (IDSs). The method can poison several learning algorithms, including SVM, LR and NB [15]. However, the method in [15] relies on a long-term, slow poisoning procedure and is complicated to implement.…”
Section: Poisoning Attacks Targeting Non-NN Models
mentioning, confidence: 99%
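To make the poisoning idea in these statements concrete, here is a deliberately simplified sketch. It is not the EPD/chronic attack from [15] (which selects injection points via edge patterns over a long campaign); it only shows the basic mechanism: an attacker injects mislabeled samples so that re-training shifts the learned decision rule. The 1-D threshold "model" and all values are invented for illustration.

```python
# Illustrative data-poisoning sketch (NOT the EPD/chronic attack of [15]):
# mislabeled points injected into the training set shift the re-trained model.

def train_threshold(samples):
    # Toy 1-D "model": decision threshold halfway between the class means.
    benign = [x for x, y in samples if y == "benign"]
    attack = [x for x, y in samples if y == "attack"]
    return (sum(benign) / len(benign) + sum(attack) / len(attack)) / 2

clean = [(0.0, "benign"), (1.0, "benign"), (9.0, "attack"), (10.0, "attack")]
t_clean = train_threshold(clean)

# Poison: attack-like points falsely labeled "benign" drag the benign mean
# upward, so attacks around 6.5 that the clean model would flag now fall
# below the re-trained threshold and go undetected.
poisoned = clean + [(8.0, "benign"), (8.5, "benign")]
t_poisoned = train_threshold(poisoned)

print(t_clean, t_poisoned)  # → 5.0 6.9375
```

Spreading such injections over many re-training cycles, in small enough doses to evade sanity checks, is the "chronic" aspect the citing authors highlight as slow and complicated to implement.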
“…As artificial intelligence becomes a national strategy in more and more countries, machine learning, which can be divided into supervised learning [1], semi-supervised learning [2] and unsupervised learning [3], is regarded as the most important method in data science. However, machine learning methods are vulnerable to a variety of attacks [4], such as poisoning [5]-[7], evasion [8], and impersonation [9]. Most of these attacks introduce adversarial samples, which exploit the vulnerability of machine learning models to achieve malicious goals.…”
Section: Introduction
mentioning, confidence: 99%