2022
DOI: 10.1609/aaai.v36i6.20586

DPNAS: Neural Architecture Search for Deep Learning with Differential Privacy

Abstract: Training deep neural networks (DNNs) under meaningful differential privacy (DP) guarantees severely degrades model utility. In this paper, we demonstrate that the architecture of a DNN has a significant impact on model utility in the context of private deep learning, whereas this effect was largely unexplored in previous studies. In light of this gap, we propose the first framework that employs neural architecture search for automatic model design in private deep learning, dubbed DPNAS. To integrate pri…
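The abstract is truncated above, so the sketch below only illustrates the general recipe it describes, not the paper's actual method: candidate models are trained with DP-SGD (per-example gradient clipping plus calibrated Gaussian noise) and compared on utility under the same privacy regime. Logistic regression on synthetic data stands in for a DNN, and `CANDIDATES`, `clip_norm`, and `sigma` are illustrative placeholders, not values from the paper.

```python
# Minimal sketch, assuming a toy stand-in for DPNAS: train each candidate
# with DP-SGD and keep the candidate with the best utility. The real search
# space, controller, and reward from the paper are NOT reproduced here.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(512, 20))
y = (X[:, :5].sum(axis=1) > 0).astype(float)

def dp_sgd(features, lr=0.1, clip_norm=1.0, sigma=1.0, steps=50, batch=64):
    """Train logistic regression on a feature subset with DP-SGD."""
    d = len(features)
    w = np.zeros(d)
    Xf = X[:, features]
    for _ in range(steps):
        idx = rng.choice(len(Xf), size=batch, replace=False)
        # Per-example gradients of the logistic loss.
        p = 1.0 / (1.0 + np.exp(-Xf[idx] @ w))
        grads = (p - y[idx])[:, None] * Xf[idx]        # shape (batch, d)
        # Clip each example's gradient to L2 norm <= clip_norm.
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads = grads / np.maximum(1.0, norms / clip_norm)
        # Average, then add Gaussian noise calibrated to the clipping bound.
        noisy = grads.mean(axis=0) + rng.normal(
            scale=sigma * clip_norm / batch, size=d)
        w -= lr * noisy
    p = 1.0 / (1.0 + np.exp(-Xf @ w))
    return ((p > 0.5) == y).mean()

# "Architecture search" reduced to choosing among hypothetical candidates;
# in DPNAS proper this would be a neural architecture search space.
CANDIDATES = {"narrow": list(range(5)), "medium": list(range(10)),
              "wide": list(range(20))}
best = max(CANDIDATES, key=lambda k: dp_sgd(CANDIDATES[k]))
print("best candidate under DP-SGD:", best)
```

The point of the sketch is the interaction the abstract highlights: because every candidate pays the same clipping-and-noise penalty, differences in achievable utility come from the model design itself, which is what the search exploits.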

Cited by 15 publications (14 citation statements) | References 23 publications
“…Wu et al [15] propose an Adaptive Differentially Private Stochastic Gradient Descent (ADPSGD) algorithm, which adjusts the random noise added to the gradient via an adaptive step size. Combining private learning with architecture search, Cheng et al [16] propose the DPNASNet model, which achieves a state-of-the-art privacy/utility trade-off.…”
Section: Related Work
confidence: 99%
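The exact noise/step-size schedule of ADPSGD is specified in [15] and is not reproduced in this snippet; the fragment below is only a generic, assumed illustration of the coupling it describes: a step size that decays over iterations scales down the injected Gaussian noise together with the clipped gradient. All constants (`lr0`, `decay`, `sigma`, `clip_norm`) are placeholders.

```python
# Generic illustration only -- NOT the ADPSGD algorithm of [15].
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(10)
clip_norm, sigma, lr0, decay = 1.0, 1.0, 0.5, 0.01

for t in range(1, 101):
    g = rng.normal(size=10)                       # stand-in for a gradient
    g = g / max(1.0, np.linalg.norm(g) / clip_norm)   # clip to clip_norm
    lr_t = lr0 / (1.0 + decay * t)                # adaptive step size
    noise = rng.normal(scale=sigma * clip_norm, size=10)
    w -= lr_t * (g + noise)                       # noise shrinks with lr_t
```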
“…To verify the effectiveness of the AFRRS mechanism, experiments were designed to compare TSA [10], BC [11], DPL-GGC [12], DP-PSAC [13], AUTO clipping [14], ADPSGD [15], DPNASNet [16], and a deep learning model without differential privacy protection (No DP). The results are shown in Table 3 and Table 4.…”
Section: Effectiveness Evaluation of Different Algorithms
confidence: 99%
“…Furthermore, [12] suggested modifying the loss function to promote fast convergence in DP-SGD. [2] developed a paradigm for neural architecture search under differential privacy constraints, while [14] proposed a gradient hard thresholding framework that provides good utility guarantees. [8] proposed a grouped gradient clipping mechanism to modulate the gradient weights.…”
Section: Related Work
confidence: 99%
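Neither the hard thresholding framework of [14] nor the exact grouping scheme of [8] is detailed in this snippet, so the sketch below shows only a generic, assumed form of grouped gradient clipping: parameters are partitioned into groups, each group's gradient is clipped to its own budget, and noise is calibrated to the combined sensitivity. `groups` and `clip_norms` are hypothetical.

```python
# Generic sketch of per-group gradient clipping -- NOT the mechanism of [8].
import numpy as np

rng = np.random.default_rng(0)
grad = rng.normal(size=12)
groups = [slice(0, 4), slice(4, 8), slice(8, 12)]   # parameter partition
clip_norms = [1.0, 0.5, 0.5]                        # per-group budgets

clipped = grad.copy()
for sl, c in zip(groups, clip_norms):
    n = np.linalg.norm(grad[sl])
    clipped[sl] = grad[sl] / max(1.0, n / c)        # clip group to norm <= c

# Overall L2 sensitivity is the norm of the per-group budgets, so the
# Gaussian noise is calibrated to that combined bound.
total_budget = np.linalg.norm(clip_norms)
noisy = clipped + rng.normal(scale=1.0 * total_budget, size=grad.shape)
```

The design choice this illustrates is that groups with smaller budgets contribute less sensitivity, letting the noise be apportioned unevenly across the parameter groups rather than set by a single global clipping bound.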