2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.01195

DISCO: Dynamic and Invariant Sensitive Channel Obfuscation for deep neural networks

Abstract: Recent deep learning models have shown remarkable performance in image classification. While these deep learning systems are getting closer to practical deployment, the common assumption made about data is that it does not carry any sensitive information. This assumption may not hold for many practical cases, especially in the domain where an individual's personal information is involved, like healthcare and facial recognition systems. We posit that selectively removing features in this latent space can protec…
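The abstract's core idea, selectively suppressing sensitive channels of an intermediate activation before it leaves the client, can be illustrated with a minimal PyTorch sketch. Everything below (the ChannelObfuscator name, the scorer architecture, the 0.5 threshold) is an illustrative assumption rather than the paper's exact design; in the paper the channel selection is learned against privacy objectives.

```python
# Minimal sketch: a small pruning network scores channels of an intermediate
# activation and zeroes out (obfuscates) the ones it deems sensitive before
# the activation leaves the device. Hypothetical architecture, not DISCO's.
import torch
import torch.nn as nn

class ChannelObfuscator(nn.Module):
    def __init__(self, num_channels: int):
        super().__init__()
        # Scores each channel from its global-average-pooled summary.
        self.scorer = nn.Sequential(
            nn.Linear(num_channels, num_channels),
            nn.Sigmoid(),  # per-channel keep probability in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width) intermediate activation
        summary = x.mean(dim=(2, 3))             # (batch, channels)
        keep_prob = self.scorer(summary)         # (batch, channels)
        mask = (keep_prob > 0.5).float()         # hard drop of "sensitive" channels
        # Straight-through estimator: forward uses the hard mask,
        # gradients flow through the soft keep probabilities.
        mask = mask + keep_prob - keep_prob.detach()
        return x * mask[:, :, None, None]

obfuscator = ChannelObfuscator(num_channels=64)
activation = torch.randn(8, 64, 28, 28)
protected = obfuscator(activation)  # sensitive channels zeroed before transmission
```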

Cited by 19 publications (25 citation statements)
References 52 publications
“…Oriented Sampling - Agnostic Noise (OS-AN): uses differentiable point cloud sampling (as in CBNS) with a fixed Gaussian distribution for the noise. This is inspired by [28], which does channel pruning of image activations to remove sensitive information. While [28] trains a DNN to prune neural activations, we train a DNN to sample point clouds [31].…”
Section: Methods
confidence: 99%
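The quoted OS-AN recipe pairs a learned, task-oriented point sampler with task-agnostic fixed Gaussian noise. A rough sketch under stated assumptions: the module name, the score network, the hard top-k (a simplification of differentiable sampling schemes with soft projection), and the 0.05 noise scale are all hypothetical choices, not taken from the citing paper.

```python
# Sketch of OS-AN as described in the quote: a DNN predicts per-point
# sampling scores (oriented sampling), and the surviving points are
# perturbed with noise from a *fixed* Gaussian (agnostic noise).
import torch
import torch.nn as nn

class OrientedSamplerAgnosticNoise(nn.Module):
    def __init__(self, in_dim: int = 3, keep_points: int = 512, noise_std: float = 0.05):
        super().__init__()
        self.keep_points = keep_points
        self.noise_std = noise_std  # fixed, task-agnostic noise scale
        self.score_net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, num_points, 3)
        scores = self.score_net(points).squeeze(-1)         # (batch, num_points)
        idx = scores.topk(self.keep_points, dim=1).indices  # oriented sampling
        sampled = torch.gather(points, 1, idx.unsqueeze(-1).expand(-1, -1, 3))
        # Agnostic noise: fixed Gaussian perturbation of the kept points.
        return sampled + self.noise_std * torch.randn_like(sampled)

sampler = OrientedSamplerAgnosticNoise()
cloud = torch.randn(4, 1024, 3)
private_cloud = sampler(cloud)  # (4, 512, 3)
```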
“…In addition to the computational and communication benefits, split learning allows distributed and privacy-preserving prediction that is not possible under the federated learning framework. Consequently, several works have used split learning for inference to build defense [7,45,50,57,61,69,70,74,84,87] and attack mechanisms [32,53,64]. Some of these benefits have led to applied evaluations of split learning for mobile phones [62], IoT [37,38,63], model selection [71,72], and healthcare [65].…”
Section: Related Work
confidence: 99%
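Split-learning inference as described in this quote moves only an intermediate activation, not the raw input or the full model, across the client-server boundary. A hedged sketch, with the layer split and module names chosen arbitrarily for illustration:

```python
# Split inference sketch: the client runs the early layers plus an
# obfuscation step locally; the server finishes the forward pass.
import torch
import torch.nn as nn

client_net = nn.Sequential(            # runs on the user's device
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
)
server_net = nn.Sequential(            # runs on the untrusted server
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10),
)

def split_inference(image: torch.Tensor, obfuscate) -> torch.Tensor:
    activation = client_net(image)     # raw input never leaves the device
    protected = obfuscate(activation)  # e.g. DISCO-style channel pruning
    return server_net(protected)       # only `protected` crosses the wire

logits = split_inference(torch.randn(1, 3, 32, 32), obfuscate=lambda a: a)
```

Defenses of the kind cited above plug in at the `obfuscate` hook, while the cited attacks target the transmitted activation, for example by attempting to reconstruct the input from it.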