2021
DOI: 10.1016/j.jisa.2021.102993

SocialGuard: An adversarial example based privacy-preserving technique for social images


Cited by 2 publications (3 citation statements)
References 25 publications
“…From the security point of view, Adversarial Attacks (AA) have shown that deep learning models can be easily fooled [13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25], while, from a privacy point of view, it has been shown that information can be easily extracted from datasets and trained models [26, 27, 28]. It has also been shown that attack methods based on adversarial samples can be used for privacy-preserving purposes [29, 30, 31, 32, 33]: in this case, data are intentionally modified to fool unauthorized software and thereby prevent unauthorized information extraction.…”
Section: Introduction
mentioning
confidence: 99%
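The mechanism this statement describes can be illustrated with a minimal untargeted FGSM sketch: a small, nearly invisible perturbation is added to an image so that an unauthorized classifier mispredicts, and the perturbed copy is shared instead of the original. This is a generic adversarial-example method, not the actual SocialGuard algorithm; the torchvision ResNet-18 stand-in recognizer and the epsilon budget are illustrative assumptions.

    import torch
    import torch.nn.functional as F
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    # Stand-in for the unauthorized recognizer (assumption, not SocialGuard's target model).
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])

    def protect(image_path, epsilon=4 / 255):
        # Load the photo as a [0, 1] tensor and track gradients w.r.t. the pixels.
        x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
        x.requires_grad_(True)
        logits = model(normalize(x))
        # Untargeted FGSM: step away from the currently predicted class.
        loss = F.cross_entropy(logits, logits.argmax(dim=1))
        loss.backward()
        x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()
        return x_adv  # share this perturbed image instead of the original

In such sketches, the budget epsilon trades off visual invisibility of the change against how reliably the unauthorized recognizer is fooled.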
“…The effectiveness of adversarial patches and noise-based methods in real-world scenarios, as well as their susceptibility to evolving facial recognition technologies, remains open for exploration [211, 193, 172, 228]. Moreover, the specificity and target-dependency of current methods based on Generative Adversarial Networks (GANs) indicate a need for more versatile and generalizable solutions [119, 206, 173, 90, 56, 25, 34, 211, 194, 215, 14, 216, 144, 104, 10, 198]. Addressing these gaps is essential for the advancement of privacy-preserving technologies in a period in which digital identity and security are increasingly important.…”
mentioning
confidence: 99%
“…Related work on inducing model misbehavior: attack strategies include directly modifying digital images with adversarial perturbations before online sharing [216, 194, 19], wearing physical objects such as adversarial T-shirts or makeup that introduce perturbations into any captured photo [82, 16, 215, 211], or placing translucent stickers on camera lenses to disrupt face detection in images [230]. Thys et al. [193] expanded the spectrum of adversarial attacks to targets exhibiting high intra-class variety, focusing in particular on human body detection.…”
mentioning
confidence: 99%
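The patch-based strategy mentioned in this statement can be sketched as a simple overlay step: a precomputed adversarial patch is pasted onto a fixed image region before the photo is shared. Training the patch itself (as in Thys et al.) is out of scope here, and the patch.png file and placement box below are hypothetical.

    from PIL import Image

    def apply_patch(image_path, patch_path="patch.png", box=(80, 60, 144, 124)):
        # box = (left, upper, right, lower) region to cover, in pixels.
        img = Image.open(image_path).convert("RGB")
        patch = Image.open(patch_path).convert("RGB")
        patch = patch.resize((box[2] - box[0], box[3] - box[1]))
        img.paste(patch, box[:2])  # overlay the patch at the region's upper-left corner
        return img  # the patched photo may evade person or face detection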