2020
DOI: 10.1587/transinf.2019inp0002

Simple Black-Box Adversarial Examples Generation with Very Few Queries

Abstract: Research on adversarial examples for machine learning has received much attention in recent years. Most previous approaches are white-box attacks: the attacker must obtain the internal parameters of a target classifier beforehand to generate adversarial examples for it. This condition is hard to satisfy in practice. There is also research on black-box attacks, in which the attacker can only obtain partial information about target classifiers; however, it seems these attacks can be prevented, since …
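The abstract describes the label-only black-box setting under a tight query budget: the attacker never sees gradients or parameters, only the classifier's output for each query. As a rough illustration of that interface (not the paper's actual algorithm), the sketch below perturbs an input against a stand-in linear classifier using only label queries; all names (`query`, `few_query_attack`, `query_budget`, `step`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in target: a fixed linear classifier over 2-D inputs (a placeholder
# for the black-box oracle, not the models attacked in the paper).
w, b = np.array([1.0, -1.0]), 0.0

def query(x):
    """Black-box interface: returns only the predicted label (0 or 1)."""
    return int(w @ x + b > 0)

def few_query_attack(x, query_budget=20, step=0.5):
    """Random-direction search: perturb x until the label flips or the
    query budget runs out. Returns (adversarial example or None, #queries)."""
    original = query(x)          # one query to learn the clean label
    used = 1
    while used < query_budget:
        d = rng.standard_normal(x.shape)
        d /= np.linalg.norm(d)   # random unit direction
        candidate = x + step * d
        used += 1                # each candidate costs one label query
        if query(candidate) != original:
            return candidate, used
    return None, used

x0 = np.array([0.3, 0.1])        # the stand-in model labels this 1
adv, used = few_query_attack(x0)
print("adversarial example:", adv, "| queries used:", used)
```

Real few-query attacks replace this naive random search with a smarter query strategy; the point of the sketch is only the threat model, in which every probe of the classifier counts against the attacker's budget.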

Cited by 5 publications (3 citation statements)
References 20 publications

“…Reinforcement learning is often associated with adversarial machine learning in different contexts and with different goals. These mechanisms can be the target of adversarial attacks [74], but also a means to conceive attacks [4], [13], [40], [41], [62]-[64], and a countermeasure to …”
Section: Related Work
confidence: 99%
“…Neglecting this characteristic is unrealistic because attackers can only perform a limited number of queries if they want to avoid detection. For example, the methods presented in [62] and [66] allow their agents to submit hundreds or even thousands of queries. Even the proposals in [4], [65] achieve evasion through dozens of attempts against the target detector.…”
Section: Related Work
confidence: 99%
“…they cannot access internal ML algorithm information; they can only run remote tests to learn which kinds of attacks ML-based detectors detect or miss, and why. Such a black-box approach has been adopted by the authors of papers like [13]. In our paper we adopt an intermediate approach, i.e.…”
Section: Related Work
confidence: 99%