2020 IEEE Symposium on Security and Privacy (SP)
DOI: 10.1109/sp40000.2020.00025
The Value of Collaboration in Convex Machine Learning with Differential Privacy

Abstract: In this paper, we apply machine learning to distributed private data owned by multiple data owners, entities with access to non-overlapping training datasets. We use noisy, differentially-private gradients to minimize the fitness cost of the machine learning model using stochastic gradient descent. We quantify the quality of the trained model, using the fitness cost, as a function of privacy budget and size of the distributed datasets to capture the trade-off between privacy and utility in machine learning. Th…
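The abstract describes collaborative training in which each data owner releases only noisy, differentially private gradients that are then used for stochastic gradient descent on a convex fitness cost. The sketch below is a minimal illustration of that idea, not the paper's exact algorithm: it assumes a least-squares fitness cost, Gaussian-mechanism noise, and illustrative values for the clipping norm, noise scale, and learning rate.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_noisy_gradient(theta, X, y, clip_norm=1.0, sigma=0.5):
    """One data owner's differentially private gradient release.

    The owner computes the gradient of the least-squares fitness cost on its
    private data, clips it to bound sensitivity, and adds Gaussian noise.
    """
    grad = X.T @ (X @ theta - y) / len(y)                      # convex least-squares gradient
    grad = grad / max(1.0, np.linalg.norm(grad) / clip_norm)   # clip to norm at most clip_norm
    noise = rng.normal(0.0, sigma * clip_norm, size=grad.shape)
    return grad + noise

def collaborative_dp_sgd(owners, dim, rounds=200, lr=0.1, sigma=0.5):
    """Average the owners' noisy gradients and take plain SGD steps."""
    theta = np.zeros(dim)
    for _ in range(rounds):
        grads = [local_noisy_gradient(theta, X, y, sigma=sigma) for X, y in owners]
        theta -= lr * np.mean(grads, axis=0)
    return theta

# Synthetic example: three owners with non-overlapping private datasets.
true_theta = np.array([1.0, -2.0, 0.5])
owners = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = X @ true_theta + 0.1 * rng.normal(size=200)
    owners.append((X, y))

theta_hat = collaborative_dp_sgd(owners, dim=3)
print("estimated model:", theta_hat)
```

Lowering sigma in this sketch corresponds to spending a larger privacy budget and yields a lower fitness cost, which is the privacy-utility trade-off the abstract refers to.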

Cited by 78 publications (47 citation statements)
References 26 publications
“…Training ML in a distributed manner can naturally provide a certain level of privacy protection, as the local training data points are usually not shared among users. Moreover, different privacy protection schemes used in centralised learning, such as encryption and perturbation, can be easily extended to the distributed learning setting [169]. In this sense, private ML in distributed systems has a lot in common with that of centralised ML.…”
Section: Privacy in Distributed Learning Systems
confidence: 99%
“…Some works focus on the application of DP in machine learning. In [28], the authors apply DP to distributed machine learning and quantify the value of the trained model. The authors in [29] investigate a DP-enabled federated learning mechanism, designing the algorithm and analyzing its performance.…”
Section: B. Federated Learning and Privacy Protection
confidence: 99%
“…More recent work proposed DP-based additions to existing machine learning algorithms to allow the learning of models with strong privacy assurance. In the context of federated learning, one example proposes a DP-enhanced gradient descent algorithm, which enables the distributed learning of a model from multiple datasets hosted by different data custodians (Wu et al., 2020).…”
Section: Provable Privacy Enhancing Mechanisms
confidence: 99%
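The citation above describes each data custodian releasing only perturbed gradients. As a rough illustration of how the perturbation relates to a per-release privacy budget, the sketch below uses the standard Gaussian-mechanism calibration sigma = sensitivity * sqrt(2 ln(1.25/delta)) / epsilon; the sensitivity bound and the lack of composition accounting across iterations are simplifying assumptions, so this should be read as an approximation rather than the paper's exact analysis.

```python
import math

def gaussian_sigma(epsilon, delta, sensitivity):
    """Gaussian-mechanism noise scale for a single (epsilon, delta) release.

    sigma >= sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon,
    valid for 0 < epsilon < 1. Composition over many SGD iterations needs
    additional accounting, which this sketch deliberately omits.
    """
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon

# Illustrative assumption: an averaged gradient clipped to norm C has L2
# sensitivity 2*C/n for a dataset of n records under replace-one adjacency.
n, clip_norm = 200, 1.0
for eps in (0.1, 0.5, 0.9):
    sigma = gaussian_sigma(eps, delta=1e-5, sensitivity=2 * clip_norm / n)
    print(f"epsilon={eps}: per-iteration noise scale sigma={sigma:.4f}")
```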