2022 · Preprint
DOI: 10.21203/rs.3.rs-1900743/v1

A Survey on Federated Learning Poisoning Attacks and Defenses

Abstract: As one kind of distributed machine learning technique, federated learning enables multiple clients to build a model across decentralized data collaboratively without explicitly aggregating the data. Due to its ability to break data silos, federated learning has received increasing attention in many fields, including finance, healthcare, and education. However, the invisibility of clients’ training data and the local training process result in some security issues. Recently, many works have been proposed to research…
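To make the setting the abstract describes concrete, the sketch below shows a minimal federated-averaging round: each client trains a local model on data that never leaves the client, and the server only averages the returned weights. All names, the linear model, and the hyperparameters are illustrative assumptions, not details of the surveyed work.

```python
import numpy as np

# Minimal federated-averaging sketch: each client fits a local linear model on
# its own private data, and only the model weights (never the raw data) are
# sent to the server, which averages them into the next global model.

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: plain gradient descent on squared error."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w, client_data):
    """Server collects locally trained weights and averages them."""
    local_ws = [local_update(global_w, X, y) for X, y in client_data]
    return np.mean(local_ws, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):  # three clients, each with a private local dataset
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print("global model after 20 rounds:", w)  # approaches true_w without pooling data
```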

Cited by 2 publications (2 citation statements)
References 62 publications
“…Data poisoning attacks and defenses have been demonstrated in the past on face recognition tasks [210]. These attacks also find a strong use case in the context of federated learning [213], [214], which are ripe for attacks via the data collection channel. Here, we find prior work on poisoning face recognition tasks in a federated learning context [215].…”
Section: Data Poisoning Attacks (mentioning)
confidence: 99%
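As a concrete illustration of the data-collection channel this statement refers to, the hypothetical sketch below shows the classic label-flipping attack a malicious client could apply to its local dataset before training; the helper name and class indices are illustrative, not taken from the cited works.

```python
import numpy as np

# Hypothetical label-flipping data poisoning by a malicious federated client:
# every local sample of a chosen source class is relabeled as a target class
# before local training, so the update the client sends back to the server
# degrades the global model's accuracy on the source class.

def flip_labels(y, source_class, target_class):
    """Return a copy of the label vector with source_class relabeled."""
    y_poisoned = y.copy()
    y_poisoned[y == source_class] = target_class
    return y_poisoned

y_local = np.array([0, 1, 1, 2, 1, 0])
print(flip_labels(y_local, source_class=1, target_class=0))  # [0 0 0 2 0 0]
```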
“…The goal of the first attack is to make the model unable to converge. The goal of the second attack is to use the global model for secret communication [61]. Milad Nasr et al. designed white-box inference attacks for deep learning models, and more specifically for FL use cases.…”
Section: Attacks on Federated Learning (mentioning)
confidence: 99%
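The first attack mentioned above, preventing the global model from converging, can be illustrated with a hypothetical model-poisoning sketch: a single Byzantine client rescales and negates its update so that plain averaging yields an attacker-controlled direction. The numbers and names below are illustrative assumptions, not details from the cited works.

```python
import numpy as np

# Hypothetical convergence-preventing model poisoning under plain averaging
# (no robust aggregation defense on the server side).

def aggregate(updates):
    """Server-side plain averaging of client updates."""
    return np.mean(updates, axis=0)

honest_updates = [np.array([0.9, 1.1]), np.array([1.0, 0.9])]  # point toward the optimum
num_honest = len(honest_updates)

# The attacker cancels the honest contribution and adds an arbitrary push,
# so the averaged update no longer follows the honest descent direction.
malicious_update = -num_honest * np.mean(honest_updates, axis=0) + np.array([5.0, -5.0])

print(aggregate(honest_updates))                       # benign round: ~[0.95, 1.0]
print(aggregate(honest_updates + [malicious_update]))  # poisoned round: [1.67, -1.67]
```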