Adversarial Regression with Multiple Learners

Preprint, 2018 · DOI: 10.48550/arxiv.1806.02256

Abstract: Despite the considerable success enjoyed by machine learning techniques in practice, numerous studies have demonstrated that many approaches are vulnerable to attacks. An important class of such attacks involves adversaries changing features at test time to cause incorrect predictions. Previous investigations of this problem pit a single learner against an adversary. However, in many situations an adversary's decision is aimed at a collection of learners rather than specifically targeted at each independently. We …

Cited by 3 publications (3 citation statements) · References 14 publications (17 reference statements)

Citation statements (Order By: Relevance):
“…An adversarial perturbation, denoted as ϵ, when added in a specific direction to an input, can mislead the model into making erroneous predictions. This phenomenon is observed in both classification [9], [10] and regression tasks [11], [12].…”
(mentioning, confidence: 83%)
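The perturbation the statement above describes can be sketched for a plain linear regression model: for squared error, stepping the input in the sign of the loss gradient with respect to that input (an FGSM-style attack) increases the model's error. The weights, input, target, and step size ϵ below are illustrative assumptions, not values from the cited papers.

```python
import numpy as np

# Hypothetical linear model f(x) = w @ x; weights and data are assumed for illustration.
w = np.array([0.5, -1.2, 2.0])
x = np.array([1.0, 0.3, -0.7])   # clean test-time input
y = 0.0                          # true target

def sq_err(x_in):
    return (w @ x_in - y) ** 2

# FGSM-style test-time perturbation: move each feature by eps in the direction
# that increases the loss. For squared error, grad_x = 2 * (w @ x - y) * w.
eps = 0.1
grad = 2 * (w @ x - y) * w
x_adv = x + eps * np.sign(grad)

print(sq_err(x), sq_err(x_adv))  # the perturbed input yields a larger squared error
```

Because the model is linear, the worst-case perturbation within an ℓ∞ ball of radius ϵ is exactly this sign step; for nonlinear regressors the same step is only a first-order approximation.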
“…Tong et al [27] consider an attacker that perturbs data samples during the test phase to induce incorrect predictions. The authors use a single-attacker, multiple-learner framework modeled as a multi-learner Stackelberg game.…”
(Section: Adversarial Regression; mentioning, confidence: 99%)
“…We also considered the optimization-based attack by Tong et al [27], but it was unable to effect a noticeable increase in MSE even at a poisoning rate of 20%. Therefore we do not present experimental results for it.…”
(Section: Threat Models; mentioning, confidence: 99%)