2021
DOI: 10.1109/tpds.2021.3064345

Accurate Differentially Private Deep Learning on the Edge

Abstract: For validation and demonstration of high-accuracy ranging and positioning algorithms and systems, a wideband radio signal generation and acquisition testbed, tightly synchronized in time and frequency, is needed. The development of such a testbed requires solutions to several challenges. Tight time and frequency synchronization, derived from a centrally distributed time-frequency reference signal, needs to be maintained in the hardware of the transmitter and receiver nodes, and wideband signal acquisition requ…

Cited by 10 publications (4 citation statements). References 64 publications.

“…(2) The principle of Purpose Limitation In this article, the FL server exposes the specific purpose of its model training to the client, who decides whether or not to participate in the training. At the same time, because the transmitted model is protected by differential privacy technology [31], the FL service provider cannot infer information from the model beyond the training task and, therefore, cannot use it for other purposes.…”
Section: Privacy Analysis (mentioning; confidence: 99%)
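
The statement above describes protecting the transmitted model itself with differential privacy so the FL server learns nothing beyond the training task. A minimal sketch of that idea, assuming a Gaussian mechanism with an illustrative clip norm and noise multiplier (neither is taken from the cited paper [31]):

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip an update's L2 norm, then add Gaussian noise (Gaussian mechanism)."""
    rng = rng if rng is not None else np.random.default_rng()
    # Bound the update's sensitivity by clipping its L2 norm to clip_norm.
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    # Noise scale is calibrated to the clipped sensitivity.
    noise = rng.normal(0.0, clip_norm * noise_multiplier, size=update.shape)
    return clipped + noise

local_update = np.ones(8) * 0.3          # stand-in for a flattened model delta
upload = privatize_update(local_update)  # the server only ever sees this
```

Clipping bounds the update's L2 sensitivity, which is what allows the added noise to be calibrated to a stated privacy guarantee.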
“…Differential privacy [13]-[15] has been widely used as a mainstream data perturbation mechanism for privacy-preserving deep learning. Shokri and Shmatikov [6] proposed a distributed deep learning scheme with participant privacy in mind.…”
Section: Related Work (mentioning; confidence: 99%)
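
To make the "data perturbation mechanism" concrete: a common pattern in privacy-preserving deep learning is to clip per-example gradients and add Gaussian noise before applying the update (DP-SGD style). The sketch below assumes per-example gradients are already computed; the constants are illustrative, not the settings of [6] or [13]-[15].

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0, sigma=1.0, rng=None):
    """One gradient-perturbation step: per-example clipping + Gaussian noise."""
    rng = rng if rng is not None else np.random.default_rng()
    # Clip each example's gradient to bound its individual influence.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / (norms + 1e-12))
    # Add Gaussian noise calibrated to the clip norm, then average.
    noisy_sum = clipped.sum(axis=0) + rng.normal(0.0, sigma * clip_norm, size=params.shape)
    return params - lr * noisy_sum / per_example_grads.shape[0]

params = np.zeros(4)
grads = np.random.default_rng(0).normal(size=(32, 4))  # one gradient per example
params = dp_sgd_step(params, grads)
```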
“…Some works [122], [153]-[155], [158], [161], [163]-[167] applied the DP techniques from the standalone mode to the distributed systems to preserve the privacy of the training data for each agent. For example, Shokri et al. [158] proposed a privacy-preserving distributed learning algorithm by adding Laplacian noise to each agent's gradients to prevent indirect leakage.…”
Section: A Differentially Private Collaborative Learning (mentioning; confidence: 99%)
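
A hedged illustration of the Laplacian-noise idea the survey attributes to [158]: each agent perturbs the gradient values it shares, with the Laplace scale set by a per-value sensitivity bound and a per-round budget epsilon (both assumptions of this example, not values from [158]).

```python
import numpy as np

def laplace_perturb(grads, epsilon, sensitivity=1.0, rng=None):
    """Laplace mechanism on shared gradients: scale b = sensitivity / epsilon."""
    rng = rng if rng is not None else np.random.default_rng()
    return grads + rng.laplace(0.0, sensitivity / epsilon, size=grads.shape)

shared = laplace_perturb(np.random.default_rng(1).normal(size=16), epsilon=0.5)
```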
“…Shokri et al. [158] and Wei et al. [159] composed the additive noise mechanisms using the strong composition theorem [152], leading to a linear increase in the privacy budget. In order to reduce the aggregated noise in local updates, Han et al. [163] dynamically adjust the batch size and noise level according to the rate of critical input data and the sensitivity estimation.…”
Section: A Differentially Private Collaborative Learning (mentioning; confidence: 99%)
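
For context on the budget growth the quote mentions, the strong (advanced) composition theorem is commonly stated as follows; the exact form in reference [152] may differ in constants. For any $\delta' > 0$, the $k$-fold composition of $(\epsilon, \delta)$-DP mechanisms satisfies $(\epsilon', k\delta + \delta')$-DP with:

```latex
% Common statement of strong (advanced) composition; constants may differ
% from the form used in reference [152].
\[
  \epsilon' \;=\; \epsilon \sqrt{2k \ln(1/\delta')}
  \;+\; k\,\epsilon\,\bigl(e^{\epsilon} - 1\bigr),
  \qquad
  \delta_{\text{total}} \;=\; k\delta + \delta'.
\]
```

The $k\,\epsilon\,(e^{\epsilon}-1)$ term grows linearly in the number of composed rounds $k$, which is consistent with the linear budget growth the citing survey observes.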