2020
DOI: 10.2478/popets-2021-0006

On the (Im)Practicality of Adversarial Perturbation for Image Privacy

Abstract: Image hosting platforms are a popular way to store and share images with family members and friends. However, such platforms typically have full access to images, raising privacy concerns. These concerns are further exacerbated with the advent of Convolutional Neural Networks (CNNs) that can be trained on available images to automatically detect and recognize faces with high accuracy. Recently, adversarial perturbations have been proposed as a potential defense against automated recognition and classification of…


Cited by 29 publications (32 citation statements)
References: 55 publications
“…Of particular interest are works by Gao et al [20] and Cherepanova et al [12] that develop transferable adversarial examples by optimizing in the metric space of facial recognition networks and study how much distortion individuals are willing to accept in their photos. In addition, Rajabi et al [47] and Oh et al [43] develop new approaches for generating adversarial examples against facial recognition that do not rely on the "standard" methods from [10,40] and show that they are robust even in the face of countermeasures.…”
Section: Adversarial Examples and Face Recognition
confidence: 99%
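
To make concrete what "optimizing in the metric space of a facial recognition network" can look like, here is a minimal, hypothetical sketch in PyTorch. It is not the method of any cited work: the toy `embedder` merely stands in for a pretrained face-embedding CNN, and the PGD-style loop pushes a photo's embedding away from its original position under an L-infinity pixel budget.

```python
# Hypothetical sketch (assumed names, not any cited paper's code): a PGD-style
# perturbation that pushes an image's embedding away from its original position
# in the metric space of a face-embedding network, within an L-infinity budget `eps`.
import torch
import torch.nn as nn

# Toy stand-in for a pretrained face-embedding CNN; a real attack would use an
# actual face recognition backbone here.
embedder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 128),
).eval()

def perturb(image, eps=8 / 255, alpha=2 / 255, steps=20):
    """Return a perturbed copy of `image` whose embedding is far from the original's."""
    with torch.no_grad():
        clean_emb = embedder(image)                       # embedding of the clean photo
    delta = torch.empty_like(image).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        dist = (embedder(image + delta) - clean_emb).norm(dim=1).mean()
        dist.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()            # ascend: increase embedding distance
            delta.clamp_(-eps, eps)                       # respect the L-infinity budget
            delta.copy_((image + delta).clamp(0, 1) - image)  # keep pixels in [0, 1]
        delta.grad.zero_()
    return (image + delta).detach()

protected = perturb(torch.rand(1, 3, 112, 112))           # e.g., one 112x112 RGB face crop
```

Because the loss is defined on embeddings rather than on any single classifier's logits, perturbations of this kind are often reported to transfer across recognition models that share similar feature spaces.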
“…The game dynamics change if the user can use adversarial examples to evade the model [53,46,54,13,8,41,38,9,1]. Such evasion attacks favor the attacker: the defender must first commit to a defense and the attacker can then adapt their strategy accordingly [55].…”
Section: Poisoning Attack Games
confidence: 99%
“…A growing body of research explores how tools from adversarial machine learning can help users fight back [46,38,54,26,45,12,13,64,66,25,7,41,1]. We revisit a recently proposed approach where users perturb the pictures they post online, in order to poison facial recognition models into misidentifying unperturbed pictures (e.g., a picture taken by a stalker or by the police).…”
Section: Introduction
confidence: 99%
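
The poisoning defense this excerpt revisits can be pictured with a similar, equally hypothetical sketch: instead of pushing embeddings away from the original, the perturbation pulls the user's uploaded photos toward a decoy identity in embedding space, so that a recognition model trained on the poisoned uploads learns the wrong features and misidentifies clean, unperturbed photos. The `embedder` argument is assumed to be any face-embedding network (for example, the stand-in from the sketch above); none of the names below come from the cited papers.

```python
# Hypothetical "cloaking"-style poisoning sketch (assumed names, not any cited paper's
# exact algorithm): perturb a photo before uploading so that its embedding collides
# with a decoy identity's embedding, within an L-infinity budget `eps`.
import torch

def cloak(user_image, decoy_image, embedder, eps=8 / 255, alpha=2 / 255, steps=50):
    with torch.no_grad():
        decoy_emb = embedder(decoy_image)                 # target embedding to imitate
    delta = torch.zeros_like(user_image).requires_grad_(True)
    for _ in range(steps):
        # Squared distance between the cloaked upload's embedding and the decoy's.
        loss = (embedder(user_image + delta) - decoy_emb).pow(2).sum()
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()            # descend: move toward the decoy
            delta.clamp_(-eps, eps)                       # respect the L-infinity budget
            delta.copy_((user_image + delta).clamp(0, 1) - user_image)
        delta.grad.zero_()
    return (user_image + delta).detach()
```

A model later trained on such cloaked uploads may associate the user's label with the decoy's features, which is exactly the defender-commits-first dynamic the poisoning and evasion excerpts above contrast.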
“…Similar ideas of using grayscale transformation have also been used in [17,37]. Privacy Protection. In addition to data poisoning approaches that can be used to make user data unexploitable during training [7,13,30,31], approaches based on adversarial machine learning have also been developed to protect privacy by misleading the machines during test time, for instance, in person-related recognition [22,23,25] and social media mining [18,20]. Privacy attributes in images were analyzed in depth by [24,27].…”
Section: Related Work
confidence: 99%