2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP)
DOI: 10.1109/mlsp.2017.8168163

Adversarial learning: A critical review and active learning study

Abstract: This paper consists of two parts. The first is a critical review of prior art on adversarial learning, identifying significant limitations of previous works. The second is an experimental study of adversarial active learning, investigating the efficacy of a mixed sample selection strategy for combating an adversary who attempts to disrupt classifier learning.

Cited by 7 publications (9 citation statements) | References 13 publications
“…How to identify adversarial samples in unsupervised and weakly supervised scenarios needs to receive more attention. Clustering techniques and active learning techniques also need to be robustified facing adversaries (e.g., Bayer, Comparetti, Hlauschek, Kruegel, & Kirda, 2009; Lin, Ke, & Tsai, 2015; Miller, Hu, Qiu, & Kesidis, 2017; Pi, Lu, Sagduyu, & Chen, 2016; Zhou, Kantarcioglu, & Xi, 2019). Meanwhile, it is important to quantify the robustness and accuracy trade‐off for machine learning algorithms facing adversarial attacks.…”
Section: Discussion
confidence: 99%
“…Defense against DP attacks applied to active learning has also been considered [63]. Active learning systems efficiently grow the number of labeled training samples by selecting, for labeling by an "oracle" (e.g., human expert), the samples from an unlabeled batch whose labelings are expected to improve the classifier's accuracy the most.…”
Section: Defenses Against Classifier-Degrading DP Attacks
confidence: 99%
“…the most actionable category, with confirmation by the oracle. Thus, [63] proposed a mixed sample selection strategy, randomly selecting from multiple criteria at each oracle labeling step. This approach reduces the frequency with which an adversarial sample is selected for labeling and thus mitigates accuracy degradation. (Footnote 4: The oracle is usually assumed to provide accurate, ground-truth labels for samples actively selected for labeling.)
Section: Defenses Against Classifier-Degrading DP Attacks
confidence: 99%
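The mixed selection idea quoted above can be illustrated with a minimal sketch: at each oracle-labeling step the learner draws one of several selection criteria at random (here, entropy-based uncertainty sampling or uniform random sampling), so a poisoned sample engineered to dominate any single criterion is queried less often. The function name, the particular two-criterion mix, and the entropy measure are illustrative assumptions, not details taken from [63].

import numpy as np

def select_for_labeling(probs, criteria=("uncertainty", "random"), rng=None):
    # probs: (n_unlabeled, n_classes) array of classifier posterior estimates.
    # Draw the selection criterion at random each round, so an adversarial
    # sample crafted to maximize any single criterion is queried less often.
    rng = np.random.default_rng() if rng is None else rng
    criterion = rng.choice(criteria)
    if criterion == "uncertainty":
        # Entropy-based uncertainty sampling: query the least confident sample.
        entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
        return int(np.argmax(entropy))
    # Otherwise, uniform random selection over the unlabeled pool.
    return int(rng.integers(len(probs)))

In a full active-learning loop, a selector like this would be called once per oracle-labeling round, with probs recomputed from the classifier retrained on the newly labeled set.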
“…Interest in adversarial learning has grown dramatically in recent years, with some works focused on devising attacks against machine learning systems, e.g., [1,2], and others devising defenses, e.g., [3,4,11]. In this work, we address data poisoning attacks on generative classifiers, with particular focus on naive Bayes spam filters.…”
Section: Introduction
confidence: 99%