2018
DOI: 10.1007/978-3-030-01270-0_37

Towards Privacy-Preserving Visual Recognition via Adversarial Training: A Pilot Study

Abstract: This paper aims to improve privacy-preserving visual recognition, an increasingly demanded feature in smart camera applications, by formulating a unique adversarial training framework. The proposed framework explicitly learns a degradation transform for the original video inputs, in order to optimize the trade-off between target task performance and the associated privacy budgets on the degraded video. A notable challenge is that the privacy budget, often defined and measured in task-driven contexts, cannot be…
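The abstract only outlines the framework at a high level. As a rough illustration (not the authors' code), the following minimal PyTorch sketch alternates between (a) training a privacy adversary on degraded inputs and (b) updating a learnable degradation transform plus the target-task head so that task accuracy is preserved while the adversary's loss is pushed up. The module shapes, the toy 10-class/2-class tasks, the weight lam, and the names degrade, task_head, and privacy_head are illustrative assumptions, and treating the adversary's loss as a subtracted penalty is only one common surrogate for the paper's more carefully defined privacy budget.

import torch
import torch.nn as nn

# f_d: learnable degradation applied to the raw frames (illustrative architecture)
degrade = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 3, 3, padding=1))
# f_T: target recognition head on degraded frames (toy 10-class task)
task_head = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
# f_B: privacy adversary trying to recover a private attribute (toy 2-class task)
privacy_head = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))

opt_util = torch.optim.Adam(list(degrade.parameters()) + list(task_head.parameters()), lr=1e-4)
opt_priv = torch.optim.Adam(privacy_head.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()
lam = 0.5  # assumed trade-off weight between utility and privacy budget

def train_step(x, y_task, y_priv):
    # (a) fit the privacy adversary on the current degraded inputs
    opt_priv.zero_grad()
    priv_loss = ce(privacy_head(degrade(x).detach()), y_priv)
    priv_loss.backward()
    opt_priv.step()

    # (b) update degradation + task head: keep the task accurate, hurt the adversary
    opt_util.zero_grad()
    xd = degrade(x)
    util_loss = ce(task_head(xd), y_task) - lam * ce(privacy_head(xd), y_priv)
    util_loss.backward()
    opt_util.step()
    return util_loss.item(), priv_loss.item()

# toy usage with random tensors standing in for 32x32 video frames
x = torch.randn(8, 3, 32, 32)
print(train_step(x, torch.randint(10, (8,)), torch.randint(2, (8,))))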

Cited by 124 publications (79 citation statements)
References 62 publications
“…In fact, the two suboptimizations in (1) denote an iterative routine to solve this unified form (performing coordinate descent between {f_T, f_O}, and f_N). This form can easily capture many other settings or scenarios, e.g., privacy-preserving visual recognition [51,48] where f_T encodes features to avoid peeps from f_N while preserving utility for f_O.…”
Section: Formulation of NDFT
confidence: 99%
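Read literally, the alternation described above can be written as two coupled subproblems. The LaTeX below is only an illustrative rendering of that coordinate-descent reading; the loss symbols L_O (utility objective) and L_N (adversary objective), the labels y and z, and the weight lambda are assumptions, not notation taken from the cited paper.

\begin{aligned}
\min_{f_T,\, f_O}\;& \mathcal{L}_O\bigl(f_O(f_T(x)),\, y\bigr) \;-\; \lambda\,\mathcal{L}_N\bigl(f_N(f_T(x)),\, z\bigr) && \text{(update encoder and utility branch, adversary fixed)}\\
\min_{f_N}\;& \mathcal{L}_N\bigl(f_N(f_T(x)),\, z\bigr) && \text{(update adversary, encoder fixed)}
\end{aligned}

Alternating these two steps is what the quoted passage calls coordinate descent between {f_T, f_O} and f_N.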
“…While differential-privacy-based perturbation approaches are emerging (e.g., [17,18]), they are currently not reversible. Perturbation schemes that thwart one pre-specified classifier while allowing another pre-specified classifier to classify correctly (e.g., [31,79]) have also been proposed. However, those approaches are also not reversible.…”
Section: Lack of Strong Guarantees
confidence: 99%
“…Seong et al. introduce an adversarial game to learn the image obfuscation strategy, in which the user and recogniser (attacker) strive for antagonistic goals: dis-/enabling recognition [32]. Wu et al. [41] propose an adversarial framework to learn the degradation transformation (e.g., anonymized video) of video inputs.…”
Section: Related Work
confidence: 99%