2021
DOI: 10.1109/jsac.2020.3041404
Byzantine-Resilient Secure Federated Learning

Cited by 157 publications
(91 citation statements)
References 15 publications
“…Byzantine-resilient defenses One popular defense mechanism against untargeted model update poisoning attacks, especially Byzantine attacks, replaces the averaging step on the server with a robust estimate of the mean, such as median-based aggregators [116,497], Krum [76], and trimmed mean [497]. Past work has shown that various robust aggregators are provably effective for Byzantine-tolerant distributed learning [436,76,116] under appropriate assumptions, even in federated settings [379,486,427]. Despite this, Fang et al [183] recently showed that multiple Byzantine-resilient defenses did little to defend against model poisoning attacks in federated learning.…”
Section: Model Update Poisoningmentioning
confidence: 99%
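The robust aggregators named in the excerpt above (coordinate-wise median and trimmed mean) can be sketched as follows. This is a minimal illustrative implementation, not the exact procedure of any cited paper; the `trim_frac` parameter and the example update vectors are assumptions for demonstration.

```python
import numpy as np

def coordinate_median(updates):
    """Coordinate-wise median of client update vectors (one row per client)."""
    return np.median(updates, axis=0)

def trimmed_mean(updates, trim_frac=0.2):
    """Coordinate-wise trimmed mean: in each coordinate, drop the largest
    and smallest trim_frac fraction of values, then average the rest."""
    n = updates.shape[0]
    k = int(n * trim_frac)
    sorted_updates = np.sort(updates, axis=0)
    return sorted_updates[k:n - k].mean(axis=0)

# Example: 5 honest clients near the true gradient, 1 Byzantine outlier.
updates = np.array([[1.0, 1.1], [0.9, 1.0], [1.1, 0.9],
                    [1.0, 1.0], [0.95, 1.05], [100.0, -100.0]])
print(coordinate_median(updates))  # outlier has no effect: [1. 1.]
print(trimmed_mean(updates))       # outlier trimmed away before averaging
```

Replacing the server's plain average with either estimator bounds the influence of a minority of Byzantine clients, which is the property the cited robustness guarantees rely on.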
“…Furthermore, the authors use additive secret sharing to protect the privacy of the data, which provides weaker guarantees than DP. We also mention BREA [33], which is a single-server approach but does not use DP either. The approach most closely related to our work is LearningChain [10], since it seems to be the only other framework that combines DP and Byzantine resilience.…”
Section: Related Workmentioning
confidence: 99%
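Additive secret sharing, mentioned in the excerpt above as the paper's privacy mechanism, can be sketched minimally as follows. The field modulus, function names, and example values are illustrative assumptions, not taken from the paper.

```python
import random

PRIME = 2**61 - 1  # field modulus; any sufficiently large prime works here

def share(secret, n):
    """Split an integer secret into n additive shares modulo PRIME.
    Any n-1 shares look uniformly random; all n are needed to recover."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares modulo PRIME."""
    return sum(shares) % PRIME

# Secure aggregation sketch: each of 3 clients shares its value across
# 3 parties; each party sums the shares it receives. Recombining the
# partial sums reveals only the total, never an individual value.
secrets = [5, 17, 42]
all_shares = [share(s, 3) for s in secrets]
partials = [sum(cs[i] for cs in all_shares) % PRIME for i in range(3)]
print(reconstruct(partials))  # 64, the sum of all secrets
```

Because the scheme is linear, summation commutes with sharing, which is what lets a server aggregate masked updates without ever seeing any single client's update in the clear.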
“…In [18], [22], [19] and [21], the proposed defences consumed additional computation time to compute gradients, high-dimensional update vectors, Principal Component Analysis, and reconstruction errors, respectively. The solutions proposed in [20] and [23] are required to compute dissimilarity and related information through clients, thus incurring additional computation cost.…”
Section: E Evaluation Metricsmentioning
confidence: 99%
“…In contrast to [20] and [23], which involve clients in the identification and verification of poisoned local model weights, TIMPANY evaluates the local model weights at the server side without involving the clients in the evaluation procedure. As a result, TIMPANY provides complete security and privacy to the clients, i.e., no information is disclosed to another client in the FL setup.…”
Section: ) Security and Privacymentioning
confidence: 99%