2020
DOI: 10.48550/arxiv.2012.06337
Preprint

Privacy and Robustness in Federated Learning: Attacks and Defenses

Abstract: As data are increasingly stored in different silos and society becomes more aware of data privacy issues, the traditional centralized training of artificial intelligence (AI) models faces efficiency and privacy challenges. Recently, federated learning (FL) has emerged as an alternative solution and continues to thrive in this new reality. Existing FL protocol designs have been shown to be vulnerable to adversaries within or outside of the system, compromising data privacy and system robustness. Besid…

Cited by 40 publications (53 citation statements)
References 105 publications
“…These attacks poison local model updates before uploading them to the server. More details of poisoning attacks and other threats of FL can be found from the survey paper [40].…”
Section: Robust Federated Learning
Mentioning confidence: 99%
“…Following [40,49,54], we use LDP to refer to the client-based approaches for ease of presentation, but it is different from the traditional LDP for data collection in [21,29,57].…”
Mentioning confidence: 99%
“…Several survey works [27], [73]- [80] have also summarized some of the threats and defenses in collaborative learning. However, they have certain drawbacks.…”
Section: Introduction
Mentioning confidence: 99%
“…Second, several surveys [73], [74], [79] mainly target the threats and defenses in federated learning. Vepakomma et al. summarize the privacy problems and defenses in distributed learning systems.…”
Section: Introduction
Mentioning confidence: 99%
“…However, FLNs with frequent model updates and communications are vulnerable to various types of privacy attacks, such as eavesdropping attacks, inference attacks, poisoning attacks and backdoor attacks, which in turn, have inspired a myriad of defense mechanisms (a.k.a. countermeasures) [3].…”
Section: Introduction
Mentioning confidence: 99%