2020
DOI: 10.1007/978-3-030-63076-8_6
Towards Byzantine-Resilient Federated Learning via Group-Wise Robust Aggregation

Cited by 11 publications (8 citation statements) · References 1 publication
“…The aggregation algorithm in FL is vulnerable to adversarial attacks [71], [72]. In the presence of label-flipping, backdoor attacks, and noisy updates, non-robust aggregation algorithms will produce unusable and compromised models.…”
Section: Non-robust Aggregation
Confidence: 99%
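The vulnerability described in this statement can be made concrete with a minimal sketch (not from the cited paper): a single Byzantine client submitting an extreme update drags a plain-mean aggregate far from the honest consensus, while a coordinate-wise median, one common robust aggregator, stays close to it. All values here are illustrative.

```python
import numpy as np

# One Byzantine client submits an extreme update; the plain mean is skewed,
# while the coordinate-wise median stays near the honest updates.
honest_updates = [np.array([0.10, -0.20]), np.array([0.12, -0.18]), np.array([0.09, -0.21])]
byzantine_update = np.array([50.0, 50.0])  # e.g., a poisoned or noisy update

all_updates = np.stack(honest_updates + [byzantine_update])

mean_agg = all_updates.mean(axis=0)          # non-robust: heavily skewed
median_agg = np.median(all_updates, axis=0)  # robust: close to honest consensus

print("mean:  ", mean_agg)    # ~[12.58, 12.35], unusable
print("median:", median_agg)  # ~[0.11, -0.19], near the honest updates
```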
“…Their work, however, is limited in that they considered only IID settings. [32] proposed a group-wise aggregation approach to address data heterogeneity, but not to defend against attacks. They developed a clustering algorithm on model parameters to group them so that if a new client comes in, its cluster assignment is determined by estimating the average center of the cluster.…”
Section: Defenses Against Anomalous Attacks in Federated Learning
Confidence: 99%
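The group-wise clustering idea quoted above can be sketched as follows, assuming k-means over flattened model-parameter vectors; the cluster count, dimensions, and function names are hypothetical, not taken from [32]. A new client is assigned to the cluster whose center is nearest to its parameters.

```python
import numpy as np
from sklearn.cluster import KMeans

# Cluster clients by their flattened model parameters, then assign a newly
# joining client to the nearest cluster center. k=3 is illustrative.
rng = np.random.default_rng(0)
client_params = rng.normal(size=(20, 128))  # 20 clients, 128-dim parameter vectors

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(client_params)

def assign_new_client(params: np.ndarray) -> int:
    """Return the index of the cluster center closest to the new client's parameters."""
    dists = np.linalg.norm(kmeans.cluster_centers_ - params, axis=1)
    return int(np.argmin(dists))

new_client = rng.normal(size=128)
print("assigned cluster:", assign_new_client(new_client))
```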
“…However, the server end may not own such data and it is difficult to determine which data should be collected. Besides, Li et al [28] proposed to use conv layers' weights of uploaded models to build an auto-encoder to detect malicious clients and Yu & Wu [59] attempted to directly use the weights to distinguish malicious clients for robust aggregation. Fu et al [9] applied a weighting scheme to give a low weight to uploaded attacked models, mitigating their negative influences.…”
Section: Mobile Federated Learning
Confidence: 99%
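A hedged sketch of detection via reconstruction error, in the spirit of the auto-encoder approach of Li et al. [28] quoted above: an auto-encoder trained on benign weight vectors reconstructs them well, so uploads with unusually high reconstruction error are flagged as suspicious. The architecture, dimensions, and threshold below are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

dim = 256  # flattened conv-layer weight vector (illustrative size)
autoencoder = nn.Sequential(
    nn.Linear(dim, 32), nn.ReLU(),  # encoder
    nn.Linear(32, dim),             # decoder
)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

benign_weights = torch.randn(100, dim)  # stand-in for benign clients' conv weights
for _ in range(200):                    # train to reconstruct benign uploads
    recon = autoencoder(benign_weights)
    loss = ((recon - benign_weights) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

def is_suspicious(w: torch.Tensor, threshold: float) -> bool:
    """Flag an upload whose reconstruction error exceeds a chosen threshold."""
    with torch.no_grad():
        err = ((autoencoder(w) - w) ** 2).mean().item()
    return err > threshold
```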
“…• IRLS [9]: IRLS applies a weighting scheme that gives a lower weight to uploaded attacked models, thus mitigating their negative influence. • GRA [59]: The goal of GRA is to directly use the weights to distinguish malicious clients for robust aggregation.…”
Section: Experimental Settings
Confidence: 99%
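The IRLS-style weighting described above can be illustrated with a generic iteratively reweighted mean using Huber-type weights; the details of [9] may differ, and the function and parameters here are assumptions. Updates far from the current estimate receive lower weight on each iteration, so attacked models contribute little to the aggregate.

```python
import numpy as np

def irls_aggregate(updates: np.ndarray, delta: float = 1.0, iters: int = 10) -> np.ndarray:
    """Iteratively reweighted mean: downweight updates far from the running estimate."""
    est = np.median(updates, axis=0)  # robust starting point
    for _ in range(iters):
        resid = np.linalg.norm(updates - est, axis=1)
        w = np.where(resid <= delta, 1.0, delta / np.maximum(resid, 1e-12))  # Huber weights
        est = (w[:, None] * updates).sum(axis=0) / w.sum()
    return est

updates = np.vstack([np.random.normal(0, 0.1, (9, 5)),  # honest clients
                     np.full((1, 5), 25.0)])             # one attacked model
print(irls_aggregate(updates))  # stays near 0 despite the outlier
```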