2018
DOI: 10.1007/978-3-030-01701-9_30
Privacy-Preserving Multiparty Learning for Logistic Regression

Abstract: In recent years, machine learning techniques have been widely used in numerous applications, such as weather forecasting, financial data analysis, spam filtering, and medical prediction. Meanwhile, massive data generated from multiple sources further improve the performance of machine learning tools. However, data sharing among multiple sources raises privacy issues, since sensitive information may be leaked in the process. In this paper, we propose a framework enabling multiple parties to collab…

Cited by 15 publications (9 citation statements)
References 20 publications
“…Du et al. [35] employed the differential privacy mechanism [36] to preserve privacy in multi-party logistic regression model training. The authors approximated the objective function using a Taylor expansion.…”
Section: Related Work
confidence: 99%
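The approximation the statement describes can be illustrated with a minimal sketch (the function names and the choice of a second-order expansion around z = 0 are assumptions for illustration, not details taken from [35]): expanding the logistic loss log(1 + e^{-z}) to second order gives the polynomial log 2 - z/2 + z²/8, whose coefficients make the objective's sensitivity straightforward to bound before calibrating noise.

```python
import numpy as np

def logistic_loss(z):
    # Exact logistic loss term: log(1 + exp(-z)), where z = y * (w . x).
    return np.log1p(np.exp(-z))

def taylor_logistic_loss(z):
    # Second-order Taylor expansion of log(1 + exp(-z)) around z = 0:
    #   log 2 - z/2 + z^2/8.
    # The polynomial form has bounded, data-independent coefficients,
    # which simplifies the sensitivity analysis needed for calibrated
    # differential-privacy noise.
    return np.log(2.0) - z / 2.0 + z ** 2 / 8.0

# Demo: compare the exact loss and its approximation on a small range.
z = np.linspace(-1.0, 1.0, 5)
approx_error = np.max(np.abs(logistic_loss(z) - taylor_logistic_loss(z)))
```

On [-1, 1] the approximation error stays below 0.005, so the low-order expansion is accurate near the decision boundary while remaining easy to privatize.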
“…The second method is differential privacy, a widely accepted mathematical framework for protecting data privacy. Its main application in machine learning is publishing a model after training in such a way that individual data points cannot be inferred from the released model [18]–[21].…”
Section: Related Work
confidence: 99%
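The release-time protection described above can be sketched with the classic Laplace mechanism (a minimal illustration; the function name and parameters are assumptions, not an API from the cited works): adding Laplace noise with scale sensitivity/ε to a released quantity satisfies ε-differential privacy.

```python
import numpy as np

def laplace_release(value, sensitivity, epsilon, rng=None):
    # Laplace mechanism: release value + Lap(sensitivity / epsilon).
    # "sensitivity" bounds how much one individual's data can change
    # the value; the calibrated noise then hides any single data point
    # in the published output.
    rng = rng if rng is not None else np.random.default_rng(0)
    return value + rng.laplace(0.0, sensitivity / epsilon)

# Demo: privately release a count of 100 with sensitivity 1 at epsilon = 1.
noisy_count = laplace_release(100.0, sensitivity=1.0, epsilon=1.0)
```

Smaller ε means stronger privacy but larger noise; the same trade-off drives noise calibration when entire model parameters, rather than a single statistic, are released.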
“…Due to its high effectiveness and efficiency, DP has been extensively applied in general machine learning [30], [36]–[39] as well as in federated learning algorithms [7], [9], [31], [32], [40], [41]. When implementing DP in machine learning, a Laplace or Gaussian mechanism is usually adopted to add noise calibrated to the global sensitivity of the gradient's norm, which, however, is difficult to estimate in many machine learning models, especially deep learning.…”
Section: Related Work
confidence: 99%
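The gradient-noising step the statement refers to is commonly handled by clipping each gradient to a fixed norm and then adding Gaussian noise scaled to that bound (a minimal sketch in the DP-SGD style; the function name and default parameters are assumptions): clipping makes the sensitivity exactly the clip norm, sidestepping the hard problem of estimating it for deep models.

```python
import numpy as np

def dp_gradient(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    # 1. Clip: rescale the gradient so its L2 norm is at most clip_norm.
    #    The global sensitivity of the (clipped) gradient is then known
    #    to be clip_norm by construction, rather than estimated.
    # 2. Noise: add Gaussian noise with std = noise_multiplier * clip_norm,
    #    the calibration used by the Gaussian mechanism.
    rng = rng if rng is not None else np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

# Demo: a gradient with norm 5 is clipped to norm 1 before noising.
g = np.array([3.0, 4.0])
private_g = dp_gradient(g)
```

The privacy loss per step then depends only on clip_norm and noise_multiplier, which is why this clip-then-noise pattern is standard in both centralized and federated DP training.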