2020
DOI: 10.1109/tifs.2020.2983254
Deep Domain Adaptation With Differential Privacy

Cited by 22 publications (7 citation statements)
References 35 publications
“…This framework establishes the sensitivity of the DP mechanism. DPDA [39] adds Gaussian noise to specific layers and corresponding gradients to achieve (ϵ, δ)-DP while using an adversarial learning strategy to obtain domain-invariant features for the classification of unlabeled target domain data.…”
Section: Differentially Private Transfer
confidence: 99%
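
The statement above describes a Gaussian-mechanism step applied to selected layers and gradients. A minimal sketch of that general recipe follows; it is illustrative only, not DPDA's actual procedure. The function name, clipping style, and constants are assumptions, and the clipping shown is batch-level rather than the per-example clipping a rigorous DP accounting would require:

```python
import torch

def add_gaussian_dp_noise(params, clip_norm=1.0, noise_multiplier=1.1):
    """Gaussian mechanism on gradients: clip the global norm, then add noise.

    Illustrative sketch: clip_norm bounds the L2 sensitivity of the update;
    noise with std = noise_multiplier * clip_norm then yields (epsilon, delta)-DP
    for appropriately calibrated values. All constants here are assumptions.
    """
    grads = [p.grad for p in params if p.grad is not None]
    # Bound the L2 sensitivity by clipping the global gradient norm.
    total_norm = torch.sqrt(sum(g.norm() ** 2 for g in grads)).item()
    scale = min(1.0, clip_norm / (total_norm + 1e-12))
    for g in grads:
        g.mul_(scale)                                  # clip to sensitivity bound
        g.add_(torch.randn_like(g),                    # calibrated Gaussian noise
               alpha=noise_multiplier * clip_norm)
```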
“…On these datasets, we consider the eight main DA tasks that have been studied [15,16,39]. The following are eight visual tasks and data processing settings; across the three domains, the image resolutions are different.…”
Section: Visual DA
confidence: 99%
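
Because the three domains ship at different image resolutions, a shared preprocessing pipeline is the usual fix. A minimal sketch under assumed settings (the target size and normalization statistics are illustrative, not taken from the cited paper):

```python
from torchvision import transforms

# Resize every domain to one shared input size so source and target
# images pass through the same feature extractor. The 224x224 size and
# ImageNet normalization statistics are illustrative assumptions.
shared_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```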
“…Therefore, it is interesting to investigate the performance of multimodal personality trait recognition methods in a cross-dataset environment. To address this issue, deep domain adaptation methods (Wang et al, 2020; Kurmi et al, 2021; Shao and Zhong, 2021) may be an alternative. Note that both the display of personality traits and the traits themselves can be considered context-dependent.…”
Section: Personality Trait Recognition Data Sets
confidence: 99%
“…[26] proposed randomizing the shared local model weights to protect against gradient leakage attacks. [51] performed domain adaptation in an adversarial-training manner. However, [33] has shown that although adversarial training can make it difficult for an adversary to recover the input data, it cannot entirely remove the sensitive information from the data representations.…”
Section: Domain Adaptation With Privacy Concern
confidence: 99%
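
One common way to realize the adversarial-training route mentioned above is a gradient reversal layer feeding a domain classifier (DANN-style). The sketch below shows that general pattern and is not necessarily how [51] implements it; the layer sizes and the lam coefficient are illustrative assumptions:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) the gradient in
    the backward pass, so the feature extractor learns to fool the
    domain classifier and the features become domain-invariant."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class DomainAdversarialHead(nn.Module):
    """Domain classifier trained adversarially against the features.
    feat_dim, hidden width, and lam are illustrative assumptions."""
    def __init__(self, feat_dim=256, lam=1.0):
        super().__init__()
        self.lam = lam
        self.classifier = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 2),  # source vs. target domain
        )

    def forward(self, features):
        return self.classifier(GradReverse.apply(features, self.lam))
```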