2020
DOI: 10.1109/tmm.2020.2987694

Exploiting Vulnerabilities of Deep Neural Networks for Privacy Protection

Abstract: Adversarial perturbations can be added to images to protect their content from unwanted inferences. These perturbations may, however, be ineffective against classifiers that were not seen during the generation of the perturbation, or against defenses based on re-quantization, median filtering or JPEG compression. To address these limitations, we present an adversarial attack that is specifically designed to protect visual content against unseen classifiers and known defenses. We craft perturbations using an it…
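For context, the defenses named in the abstract are simple image transforms that can destroy naively crafted perturbations. The sketch below is a generic illustration of those transforms, not the paper's method; it assumes a NumPy/Pillow/SciPy environment, and the quantization step, filter window and JPEG quality are illustrative values.

```python
import io

import numpy as np
from PIL import Image
from scipy.ndimage import median_filter


def requantize(img, step=32):
    """Re-quantize pixel values onto a coarser grid (multiples of `step`)."""
    arr = np.asarray(img, dtype=np.uint8)
    return Image.fromarray((arr // step) * step)


def median_smooth(img, size=3):
    """Apply a per-channel median filter with a `size` x `size` window."""
    arr = np.asarray(img, dtype=np.uint8)
    return Image.fromarray(median_filter(arr, size=(size, size, 1)))


def jpeg_compress(img, quality=75):
    """Round-trip the image through JPEG compression at the given quality."""
    buffer = io.BytesIO()
    img.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return Image.open(buffer).convert("RGB")


# Example: apply all three defenses to a (possibly perturbed) image.
# protected = jpeg_compress(median_smooth(requantize(Image.open("photo.jpg").convert("RGB"))))
```

A perturbation that still misleads a classifier after this round trip exhibits the kind of robustness to known defenses that the abstract refers to.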


Citations: cited by 18 publications (5 citation statements)
References: 19 publications
“…There are differences in these five parts of the current mainstream CNN models ( Khan et al, 2019 ). Since Resnet solves the problem of network degradation in DL, it becomes the backbone network for subsequent research ( Sanchez-Matilla et al, 2020 ; Veit and Belongie, 2020 ; Zhang et al, 2020b ). In this study, Resnet-18 was used as the backbone network to construct CNNs ( He et al, 2016 ).…”
Section: Methods
mentioning confidence: 99%
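As an aside on the statement above, using ResNet-18 as a backbone typically amounts to loading the pretrained model and replacing its classification head. A minimal sketch, assuming a recent torchvision; the class count is illustrative:

```python
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet-18 and replace the final fully connected
# layer so it matches the target task's label count (10 here is illustrative).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 10)
```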
“…Given an image as input, the adversarial attack can be realized by adding imperceptible noises or applying natural transformations. On the one hand, several adversarial noise attack methods got promising attack results, such as gradient computation based fast gradient sign method (FGSM) [28], iterative-version FGSM [29], momentum iterative FGSM [30], different distance metrics based C&W method [31], attended regions and features based TAA method [32], randomization based [33], perceptually aware and stealthy adversarial denoise [34], and so on. On the other hand, some natural transformations that are imperceptible to humans can be applied for image attack.…”
Section: General Adversarial Attacks
mentioning confidence: 99%
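For reference, the gradient-sign attacks listed in the statement above all build on the same one-step primitive. The following is a minimal, generic FGSM sketch, not the attack proposed in this paper; the model, input tensor and label are assumed to be provided, and epsilon is an illustrative budget.

```python
import torch
import torch.nn.functional as F


def fgsm_perturb(model, image, label, epsilon=8 / 255):
    """One-step FGSM: move `image` along the sign of the loss gradient.

    `image` is a (1, C, H, W) tensor in [0, 1]; `label` holds the class index.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Iterative and momentum variants repeat this step with a smaller step size, optionally accumulating the gradient direction across iterations.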
“…Recognizing scenes is challenging since it involves understanding the context of the characteristic concepts in different scene categories. We consider a specific task called Pixel Privacy [70], which was introduced by the MediaEval Multimedia Benchmark, and has been explored in the following work on adversarial images [32], [71], [72]. This task is focused on developing image modification techniques that can protect privacy-sensitive scene information of users against automatic inference of privacy-sensitive scene information.…”
Section: Scene Recognition
mentioning confidence: 99%