2022
DOI: 10.48550/arxiv.2203.17005
Preprint

Privacy-Preserving Aggregation in Federated Learning: A Survey

Abstract: In recent years, with the increasing adoption of Federated Learning (FL) algorithms and growing concerns over personal data privacy, Privacy-Preserving Federated Learning (PPFL) has attracted tremendous attention from both academia and industry. Practical PPFL typically allows multiple participants to train their machine learning models individually, which are then aggregated to construct a global model in a privacy-preserving manner. As such, Privacy-Preserving Aggregation (PPAgg), as the key protocol in…

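To make the aggregation step described in the abstract concrete, the following is a minimal illustrative sketch (not the survey's own protocol) of one common PPAgg idea: pairwise additive masking, where each client perturbs its local update with masks that cancel out in the sum, so the server learns only the aggregate. All names, dimensions, and the use of NumPy are assumptions made for illustration.

# Sketch of masking-based privacy-preserving aggregation (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

NUM_CLIENTS = 3   # hypothetical number of FL participants
MODEL_DIM = 4     # hypothetical model-update dimension

# Each client trains locally; here the local "updates" are stand-in random vectors.
local_updates = [rng.normal(size=MODEL_DIM) for _ in range(NUM_CLIENTS)]

# Clients i < j agree on a shared random mask m_ij; client i adds +m_ij and
# client j adds -m_ij, so all masks cancel when the masked updates are summed.
pairwise_masks = {
    (i, j): rng.normal(size=MODEL_DIM)
    for i in range(NUM_CLIENTS)
    for j in range(i + 1, NUM_CLIENTS)
}

def mask_update(client_id, update):
    """Return the client's update with all of its pairwise masks applied."""
    masked = update.copy()
    for (i, j), m in pairwise_masks.items():
        if client_id == i:
            masked += m
        elif client_id == j:
            masked -= m
    return masked

# The server only ever receives masked updates ...
masked_updates = [mask_update(c, u) for c, u in enumerate(local_updates)]

# ... yet their average equals the average of the true updates, so the global
# model can be built without exposing any individual contribution.
aggregate = sum(masked_updates) / NUM_CLIENTS
assert np.allclose(aggregate, sum(local_updates) / NUM_CLIENTS)
print("aggregated model update:", aggregate)

In this toy setting every client participates; practical protocols of this family additionally handle dropouts and collusion, which is part of what the surveyed PPAgg literature addresses.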
Cited by 1 publication (2 citation statements)
References 183 publications (268 reference statements)
“…(10) Ziyao Liu et al [29] summarized federated learning privacy attacks and defense schemes, and they also presented the current challenges related to this area.…”
Section: Client Server
Mentioning, confidence: 99%
“…Major contents:
[20] early work that concludes data poisoning, model poisoning, and defense
[21] problems of communication, poisoning attacks, inference attacks and privacy leakage
[22] the concept of semi-supervised federated learning and applications
[23] defenses against model poisoning and privacy inference attacks
[24] blockchain-based privacy protection for federated learning
[25] privacy protection classification of federated learning and the defenses
[26] federated learning privacy protection, communication overhead, and malicious participant defenses
[27] defense methods for model poisoning
[28] federated learning privacy protection convergence programme
[29] survey and evaluation of federated learning privacy attacks and defenses programs
[30] federated learning robustness, privacy attacks, and defenses…”
Section: Ref
Mentioning, confidence: 99%