Model Poisoning Defense on Federated Learning: A Validation Based Approach
2020 | DOI: 10.1007/978-3-030-65745-1_12

Cited by 10 publications (6 citation statements)
References 5 publications
“…We can divide methods for detecting poisoning attacks based on evaluating model performance into two categories: evaluating local models [55], [57], [59], [62] and evaluating global models [56], [60], [63].…”
Section: Performance Evaluation
confidence: 99%
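The two categories can be made concrete with a short sketch. Below is a minimal PyTorch-style illustration in Python, not code from any of the cited papers; it assumes classification models and a server-held validation loader, and the function names are hypothetical.

```python
import torch

def accuracy(model, val_loader):
    """Fraction of correctly classified examples in a held-out validation set."""
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for x, y in val_loader:
            pred = model(x).argmax(dim=1)
            correct += (pred == y).sum().item()
            total += y.numel()
    return correct / total

def evaluate_local_models(client_models, val_loader):
    """Category (a): score each client's local model before aggregation."""
    return [accuracy(m, val_loader) for m in client_models]

def evaluate_global_model(aggregate, val_loader):
    """Category (b): score the aggregated global model after averaging."""
    return accuracy(aggregate, val_loader)
```

The practical difference is where the check happens: per-client scoring can single out a malicious contributor, while global scoring only reveals that some aggregate was harmed.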
“…For example, Minghong et al. [55] remove local models that significantly harm the accuracy and loss of the global model. Yuao et al. [57] update the global model using only local models that perform well on a test set, and mark clients that upload low-accuracy models as malicious. Furthermore, Mallah et al. [59] defend against poisoning attacks by 1) monitoring the convergence of the local model during training, 2) monitoring the angular distance of successive local model updates, and 3) removing local model updates from clients whose performance does not improve.…”
Section: Performance Evaluation
confidence: 99%
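A minimal sketch of the kind of checks described above, not the cited authors' implementations: the accuracy and angle thresholds are illustrative values, and the `accept_update` interface is an assumption made for this example.

```python
import numpy as np

def flatten(update):
    """Concatenate a client's parameter arrays into one vector."""
    return np.concatenate([np.asarray(p).ravel() for p in update])

def angular_distance(u, v):
    """Angle (radians) between two successive updates from one client."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def accept_update(client_id, update, val_acc, history,
                  acc_threshold=0.5, angle_threshold=2.0):
    """Apply two checks of the kind described above: a validation-accuracy
    floor and a bound on the angular distance of successive updates."""
    vec = flatten(update)
    prev = history.get(client_id)
    history[client_id] = vec
    if val_acc < acc_threshold:
        return False  # low-accuracy model: flag the client as suspicious
    if prev is not None and angular_distance(prev, vec) > angle_threshold:
        return False  # erratic change of update direction between rounds
    return True
```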
“…Our proposed Performance Weighting scheme builds upon the work of Stripelis & Ambite (2020). Similar approaches were investigated by ; Wang et al. (2020b). Specifically, in , the authors aggregate the local models into sub-models and delegate their evaluation to learners that have a similar data distribution to the aggregated model, while in Wang et al. (2020b) the authors evaluate the local models against a validation dataset hosted at the central server. However, both proposed approaches use validation-based accuracy as a detection mechanism to discard corrupted learners from the federation, whereas in our work we keep the corrupted models in the federation with a downgraded contribution value.…”
Section: Background and Related Work
confidence: 99%
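In that spirit, a performance-weighted average can be sketched as follows. This is an illustration of the weighting idea, not the Performance Weighting scheme from the citing paper; the softmax form and the temperature value are assumptions of this sketch.

```python
import numpy as np

def performance_weighted_average(models, val_accuracies, temperature=0.1):
    """Aggregate flattened model vectors with weights derived from each
    model's validation accuracy. Low-accuracy (possibly corrupted) models
    stay in the federation but contribute less, instead of being discarded."""
    accs = np.asarray(val_accuracies, dtype=float)
    weights = np.exp(accs / temperature)   # higher accuracy -> larger weight
    weights /= weights.sum()               # normalize to a convex combination
    stacked = np.stack([np.asarray(m, dtype=float) for m in models])
    return weights @ stacked               # weighted average of parameters
```

As the temperature goes to zero this approaches keeping only the best-scoring model, while a large temperature approaches plain averaging, so the parameter controls how harshly low performers are downgraded.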
“…Therefore, the adversarial relevance on the global model is reduced. Other confidence- or score-based anomaly detection methods have been proposed [77], [129], [130].…”
Section: B. Defending Integrity and Availability
confidence: 99%
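As one generic instance of a score-based detector (a sketch under my own assumptions, not the method of [77], [129], or [130]), client updates can be scored by how far their norms deviate from the round's median:

```python
import numpy as np

def anomaly_scores(update_norms):
    """Robust z-scores of per-client update norms: distance from the
    round's median in units of the median absolute deviation (MAD)."""
    norms = np.asarray(update_norms, dtype=float)
    med = np.median(norms)
    mad = np.median(np.abs(norms - med)) + 1e-12
    return np.abs(norms - med) / mad

# Clients whose score exceeds a cutoff (e.g. 3) can be down-weighted or
# dropped; the cutoff is an illustrative choice, not a value from the papers.
```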