2019
DOI: 10.48550/arxiv.1905.05897
Preprint
Transferable Clean-Label Poisoning Attacks on Deep Neural Nets

Cited by 25 publications (45 citation statements)
References 13 publications

“…Liu et al [13] proposed to use adversarial perturbation to protect image privacy from both humans and AI. Zhu et al [14] introduced a new "polytope attack" in which poison images were designed to surround the targeted image in the feature space. Taking both ideas into account, Fawkes [15], the state-of-the-art method, helped users add imperceptible "cloaks" to their own photos before releasing them.…”
Section: Adversarial Perturbation-based Methods
confidence: 99%
“…Traditional anonymization techniques are mainly obfuscation-based and always significantly alter the original face. Other previous work in this field is sparse and limited in both practicality and efficacy: k-same algorithm-based methods [5][6][7][8][9] fail to make full use of existing data and deliver fairly poor visual quality; adversarial perturbation-based methods [10][11][12][13][14][15] usually depend heavily on access to the target system and require special training; recent GAN-based methods [16][17][18][19][20][21][22][23][24][25][26] also have trouble generating visually similar de-identified faces. Note that there exists a trade-off between privacy protection and dataset utility [27,28], and previous methods are unable to balance the two.…”
Section: Introduction
confidence: 99%
“…Several data-poisoning-like attacks (Gu et al., 2017; Liu et al., 2018b) utilize patch/watermark triggers. Clean-label attacks (Shafahi et al., 2018; Saha et al., 2020; Turner et al., 2019; Zhao et al., 2020; Zhu et al., 2019) inject backdoors without changing data labels. Salem et al. (2020) leveraged a GAN to construct dynamic triggers with random patterns and locations.…”
Section: Related Work
confidence: 99%
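
The clean-label attacks grouped in the statement above share a common idea: the poison keeps its correct label and inconspicuous appearance, and only its feature-space position is manipulated. As a hedged illustration, here is a minimal PyTorch-style sketch in the spirit of the feature-collision objective of Shafahi et al. (2018); the function name, the `feature_extractor` interface, and all hyperparameters are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (assumed interfaces): craft a clean-label poison whose
# pixels stay close to a correctly labeled base image while its features are
# pushed toward a chosen target image's features.
import torch

def craft_feature_collision_poison(feature_extractor, base_img, target_img,
                                   beta=0.1, lr=0.01, steps=1000):
    """Return an image that looks like `base_img` but lies near `target_img`
    in the extractor's feature space. All names/parameters are hypothetical."""
    poison = base_img.clone().requires_grad_(True)
    target_feat = feature_extractor(target_img).detach()
    opt = torch.optim.Adam([poison], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Feature-space collision term plus a pixel-space similarity penalty.
        loss = (torch.norm(feature_extractor(poison) - target_feat) ** 2
                + beta * torch.norm(poison - base_img) ** 2)
        loss.backward()
        opt.step()
        with torch.no_grad():
            poison.clamp_(0.0, 1.0)  # keep a valid image in [0, 1]
    return poison.detach()
```

A poison crafted this way can be released with the base image's true class label, so inspection of the training set reveals nothing unusual.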
“…However, the method proposed by Shafahi et al. [98] requires complete or query access to the victim model. Then, Zhu et al. [110] assumed the victim model is not accessible to the attacker and proposed a new convex polytope attack in which poison images are designed to surround the targeted image in the feature space.…”
Section: B. Backdoor Attacks, 1) Data Poisoning
confidence: 99%
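
The convex polytope attack summarized in this last statement generalizes the single-poison collision: several poisons are optimized jointly so that the target's feature lies approximately inside the convex hull of their features, which is what lets the attack transfer to victim models the attacker cannot query. Below is a hedged, simplified sketch of that objective; the joint optimization of convex weights via a softmax, the `feature_extractor` interface, and the `eps` pixel budget are illustrative assumptions, not the paper's exact alternating procedure.

```python
# Illustrative sketch (assumed interfaces): optimize k poison images and convex
# weights so that a convex combination of the poison features matches the
# target's feature, i.e. the target falls inside the poisons' feature polytope.
import torch
import torch.nn.functional as F

def craft_polytope_poisons(feature_extractor, base_imgs, target_img,
                           eps=8 / 255, lr=0.01, steps=500):
    poisons = base_imgs.clone().requires_grad_(True)             # (k, C, H, W)
    logits = torch.zeros(base_imgs.size(0), requires_grad=True)  # convex weights
    target_feat = feature_extractor(target_img).detach().flatten(1)
    opt = torch.optim.Adam([poisons, logits], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        coeffs = F.softmax(logits, dim=0)                 # nonnegative, sums to 1
        feats = feature_extractor(poisons).flatten(1)     # (k, d)
        combo = (coeffs.unsqueeze(1) * feats).sum(dim=0, keepdim=True)
        loss = F.mse_loss(combo, target_feat)             # distance to the hull point
        loss.backward()
        opt.step()
        with torch.no_grad():
            # Keep each poison within an eps ball of its (correctly labeled) base.
            poisons.data = base_imgs + (poisons.data - base_imgs).clamp(-eps, eps)
            poisons.data.clamp_(0.0, 1.0)
    return poisons.detach(), F.softmax(logits, dim=0).detach()
```

The intuition reported in the paper is that a classifier which places all the poisons on their (correct) label's side of a linear decision boundary in this feature space tends to place the enclosed target on the same side, so the targeted misclassification can carry over to other networks trained on the poisoned data.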