2022 41st International Symposium on Reliable Distributed Systems (SRDS)
DOI: 10.1109/srds55811.2022.00017

Never Too Late: Tracing and Mitigating Backdoor Attacks in Federated Learning

Cited by 3 publications (7 citation statements)
References 24 publications
“…Figure 3 shows an example of inference attacks. In a backdoor attack, the attacker's goal is to destroy the global FL model and replace the actual global FL model with the attacker's model [74]. This attack can also be classified as a model poisoning attack, but it is more harmful than poisoning attacks [60].…”
Section: Privacy Leakage and Threats in FL
Mentioning confidence: 99%
“…Person Re-identification (ReID) is a fundamental yet challenging task in computer vision (CV), playing a paramount role in plenty of applications such as intelligent video surveillance, urban security, and smart retailing (Ye et al 2021;Zeng et al 2022).…”
Section: Introduction
Mentioning confidence: 99%
“…Federated Learning (FL) (McMahan et al 2017) enables multiple devices to jointly train machine learning models without sharing their raw data. Due to the unreachability to distributed data, it is vulnerable to attacks from malicious clients (Wang et al 2020), especially backdoor attacks that neither significantly alter the statistical characteristics of models as Gaussian-noise attacks (Blanchard et al 2017) nor cause a distinct modification to the training data as label-flipping attacks (Liu et al 2021), and thus, are more covert against many existing defenses (Zeng et al 2022).…”
Section: Introduction
Mentioning confidence: 99%
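The statement above refers to FedAvg-style training (McMahan et al. 2017), in which a server averages client model updates weighted by local data size without ever seeing raw data. The following is a minimal sketch of that aggregation step, given for orientation only; the names (fedavg, client_updates, sample_counts) are illustrative and are not taken from the cited papers.

```python
# Minimal FedAvg-style aggregation sketch (after McMahan et al. 2017).
# All names here are illustrative; this is not code from the cited papers.
import numpy as np

def fedavg(client_updates, sample_counts):
    """Average client parameter vectors, weighted by each client's data size."""
    weights = np.asarray(sample_counts, dtype=float)
    weights /= weights.sum()
    stacked = np.stack(client_updates)        # shape: (num_clients, num_params)
    return np.average(stacked, axis=0, weights=weights)

# Example: three clients with different local dataset sizes.
updates = [np.array([0.10, 0.20]), np.array([0.30, 0.10]), np.array([0.20, 0.25])]
new_global = fedavg(updates, sample_counts=[100, 50, 150])
```

Because the server only sees these parameter vectors, a malicious client can submit an update that embeds a backdoor while remaining statistically close to benign updates, which is the covertness the quoted passage describes.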
“…The former may negatively impact global model accuracy (Yu et al 2021). The latter assumes globally clear boundaries between benign and infected model updates (Zeng et al 2022). However, backdoor attacks typically manipulate a limited subset of parameters, resulting in the similarity between benign and infected model updates.…”
Section: Introduction
Mentioning confidence: 99%
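To illustrate why a globally clear boundary between benign and infected updates is a strong assumption, consider the deliberately naive similarity-based filter sketched below (an illustration only, not the defense of the traced paper or of any cited work): it flags updates whose cosine similarity to the mean update falls below a threshold, and a backdoor update that alters only a small subset of parameters can easily stay above that threshold.

```python
# Naive similarity-based filtering sketch (illustrative only; not the method
# of the cited papers). Backdoor updates that modify few parameters can remain
# highly similar to benign updates and slip past such a threshold.
import numpy as np

def flag_suspicious(client_updates, threshold=0.8):
    """Return True for updates whose cosine similarity to the mean update is below threshold."""
    stacked = np.stack(client_updates)
    mean_update = stacked.mean(axis=0)
    mean_norm = np.linalg.norm(mean_update)
    flags = []
    for update in stacked:
        denom = np.linalg.norm(update) * mean_norm + 1e-12
        cos = float(update @ mean_update) / denom
        flags.append(cos < threshold)     # True -> treated as suspicious
    return flags
```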