2022
DOI: 10.48550/arxiv.2207.00872
Preprint

FL-Defender: Combating Targeted Attacks in Federated Learning

Abstract: Federated learning (FL) enables learning a global machine learning model from local data distributed among a set of participating workers. This makes it possible i) to train more accurate models due to learning from rich joint training data, and ii) to improve privacy by not sharing the workers' local private data with others. However, the distributed nature of FL makes it vulnerable to targeted poisoning attacks that negatively impact the integrity of the learned model while, unfortunately, being difficult to…
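The abstract describes FL as aggregating workers' locally trained models into a global one, and the citation statements below refer to the FedAvg protocol. As a point of reference, here is a minimal sketch of FedAvg-style weighted averaging; the data layout (one list of layer tensors per client) and the sample-count weighting are illustrative assumptions, not code from the paper.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg: average client parameters, weighted by local data size.

    client_weights: one list of layer tensors (np.ndarray) per client.
    client_sizes:   number of local training samples per client.
    """
    total = float(sum(client_sizes))
    coeffs = [n / total for n in client_sizes]
    # For each layer, sum the clients' tensors scaled by their weight.
    return [
        sum(c * w[layer] for c, w in zip(coeffs, client_weights))
        for layer in range(len(client_weights[0]))
    ]

# Example: two clients, one layer each; client 0 holds twice the data.
w0 = [np.array([1.0, 1.0])]
w1 = [np.array([4.0, 4.0])]
print(fedavg([w0, w1], [200, 100]))  # -> [array([2., 2.])]
```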

Cited by 1 publication (2 citation statements) | References 20 publications (24 reference statements)
“…For example, an attacker could train a model to recognize a stop sign but also include a hidden trigger that causes misclassification, leading to unsafe situations in the real world. In FL, a malicious client $k$ can inject a backdoor into the global model $W^{(t+1)}$ by manipulating its local model $W_k^{(t)}$ [3], [8], [13], [17], [19]. Prior approaches [13], [19] propose defenses that change the underlying FL training protocol (e.g., changes to the FedAvg protocol).…”
Section: Background and Related Work
confidence: 99%
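To make the manipulation concrete, below is a hedged sketch of one well-known instantiation from the backdoor literature, scaling-based model replacement: the malicious client boosts the deviation of its backdoored local model from the current global model so that averaging over $n$ clients still lands near the poisoned weights. The boost factor and the assumption that honest updates roughly cancel are simplifications, not necessarily the exact attack the cited works evaluate.

```python
import numpy as np

def model_replacement_update(global_model, backdoored_model, num_clients):
    """Scale a backdoored client's deviation so it survives averaging.

    If honest clients' updates roughly cancel (model near convergence),
    averaging n submissions, one of which is
        W_global + n * (W_backdoored - W_global),
    yields approximately W_backdoored as the next global model.
    """
    gamma = float(num_clients)  # boost factor; a common simplification
    return [g + gamma * (b - g) for g, b in zip(global_model, backdoored_model)]

# Demo: 5 clients, 4 honest ones resubmit the current global weights.
g = [np.array([0.0, 0.0])]
b = [np.array([1.0, 1.0])]  # weights encoding the backdoor
subs = [g] * 4 + [model_replacement_update(g, b, 5)]
avg = [sum(s[0] for s in subs) / 5]
print(avg)  # -> [array([1., 1.])]: the aggregate matches the backdoored model
```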
“…Since the server does not have access to the clients' raw training data, and such attacks remain hidden until the trigger is present in the input, it is difficult to detect the presence of a backdoor. This makes it challenging to defend against such attacks in the FL setting [3], [8], [13], [19].…”
Section: Introduction
confidence: 99%
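As a toy illustration of why such a backdoor stays dormant: the poisoned model behaves normally on clean inputs and misbehaves only when a small trigger pattern appears. The corner-patch trigger below is a hypothetical example, not the trigger studied in the cited works.

```python
import numpy as np

def stamp_trigger(image, size=3, value=1.0):
    """Stamp a small square patch in the image's bottom-right corner.

    Clean images never contain the patch, so the server (which never
    sees raw client data) observes normal behavior; only trigger-bearing
    inputs at inference time activate the misclassification.
    """
    triggered = image.copy()
    triggered[-size:, -size:] = value
    return triggered
```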