2023
DOI: 10.1016/j.knosys.2022.110178
FL-Defender: Combating targeted attacks in federated learning

Cited by 31 publications (11 citation statements)
References 10 publications
“…We can see from the table that our layered defense framework can defend well against different types of backdoor attacks. Compared with the most advanced FL-Defender [ 21 ], the defense effect is almost identical or even better. Our main task model accuracy (CA) is comparable to the model accuracy of clean dataset training, reaching more than 90%.…”
Section: Experimental Evaluationmentioning
confidence: 99%
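The comparisons quoted here rest on two standard backdoor-evaluation metrics: main-task (clean) accuracy, abbreviated CA above, and the attack success rate on triggered inputs. A minimal sketch of how these metrics are typically computed — the function names and toy arrays are illustrative, not taken from the cited papers:

```python
import numpy as np

def clean_accuracy(preds, labels):
    """Fraction of clean test inputs classified correctly (CA)."""
    return float((preds == labels).mean())

def attack_success_rate(preds_on_triggered, target_label):
    """Fraction of trigger-stamped inputs classified as the attacker's target class."""
    return float((preds_on_triggered == target_label).mean())

# Toy demo: 4 clean predictions vs. ground truth, and 4 predictions on triggered inputs.
preds = np.array([1, 2, 2, 3])
labels = np.array([1, 2, 3, 3])
trig_preds = np.array([9, 9, 1, 9])  # attacker's target class is 9

ca = clean_accuracy(preds, labels)
asr = attack_success_rate(trig_preds, target_label=9)
```

A strong defense drives the attack success rate toward chance level while keeping CA close to that of a model trained on clean data, which is the benchmark the quoted statement applies (CA above 90%).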
“…In addition, more types of backdoor attacks can be defended against, including anti-label inversion attacks, feature collision attacks, dynamic backdoor attacks, and clean-label attacks. However, FL-Defender [ 21 ] and FoolsGold [ 28 ] do not defend well against dynamic and clean-label backdoor attacks. For example, the accuracy of the main task drops to 63.5%, and the attack success rate of constrain-and-scale [ 14 ] is 100%.…”
Section: Experimental Evaluationmentioning
confidence: 99%
“…The comparison of this survey with the existing literature is summarized in Table I. It can be seen that some existing surveys, e.g., [44], [45], [46], and [47], considered backdoor attacks and backdoor defenses as part of the robustness threats to WFL; however, the limitations of the existing backdoor attack and defense methods were not highlighted. On the other hand, [48], [49], [50], and [51] treated WFL as one of several deep learning applications when discussing the impact of backdoor attacks, but provided no detailed analysis of the vulnerabilities of WFL to backdoor attacks.…”
Section: B Review Of Existing Surveys and Gap Analysismentioning
confidence: 99%
“…† Both authors contributed equally to this research. on the global model by compromising their local models through either (i) untargeted attacks aimed at slowing down the learning process or decreasing the overall performance of the global model [6], or (ii) targeted attacks (also known as backdoor attacks) that introduce a backdoor into a model, causing it to exhibit malicious behaviors when the inputs contain a predefined trigger [7], [8]. Many poisoning attacks can swiftly degrade the global model's performance or inject backdoors within a few FL rounds, and the effect can persist for many subsequent rounds [9], raising significant security concerns [10], [11].…”
Section: Introductionmentioning
confidence: 99%
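The trigger-based targeted attack described in the quote above can be illustrated with a toy data-poisoning sketch: a small pixel patch is stamped onto training images and their labels are flipped to the attacker's target class. The patch location, size, value, and target label here are illustrative assumptions, not the threat model of any cited paper:

```python
import numpy as np

def poison_batch(images, labels, target_label, patch_size=3, patch_value=1.0):
    """Stamp a pixel-patch trigger onto a batch and relabel it.

    Copies the batch, sets a small bright square in the bottom-right
    corner of every image (the trigger), and assigns the attacker's
    target class to each poisoned sample.
    """
    poisoned = images.copy()
    poisoned[:, -patch_size:, -patch_size:] = patch_value
    flipped = np.full_like(labels, target_label)
    return poisoned, flipped

# Toy demo: 4 grayscale 8x8 "images" with labels 1..4, target class 9.
imgs = np.zeros((4, 8, 8))
lbls = np.array([1, 2, 3, 4])
p_imgs, p_lbls = poison_batch(imgs, lbls, target_label=9)
```

A model trained on enough such samples learns to associate the patch with the target class while behaving normally on clean inputs, which is why the backdoor can persist across FL rounds undetected.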