2022
DOI: 10.1109/tifs.2022.3169918
ShieldFL: Mitigating Model Poisoning Attacks in Privacy-Preserving Federated Learning

Abstract: Federated learning (FL) has become a popular tool for solving traditional Reinforcement Learning (RL) tasks. The multi-agent structure addresses the major concern of data hunger in traditional RL, while the federated mechanism protects the data privacy of individual agents. However, the federated mechanism also exposes the system to poisoning by malicious agents that can mislead the trained policy. Despite the advantages brought by FL, the vulnerability of Federated Reinforcement Learning (FRL) has not been well…

Cited by 101 publications (19 citation statements) · References 54 publications
“…Based on these data, we can draw the following conclusions [38], [49], [51]. The biggest problem with the latter is that current research relies on the predictive accuracy of the global model on a validation dataset to determine whether the model has been poisoned, and targeted poisoning attacks may evade this defense.…”
Section: Discussion
confidence: 99%
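The weakness the statement describes can be made concrete. Below is a minimal sketch of a validation-accuracy gate for aggregated updates; the function name, the accuracy values, and the `tolerance` parameter are illustrative assumptions, not the scheme of any cited paper. It shows why untargeted poisoning (which tanks accuracy) is caught while a targeted backdoor (which preserves clean accuracy) slips through.

```python
def accuracy_gate(global_acc: float, candidate_acc: float, tolerance: float = 0.02) -> bool:
    """Accept a candidate aggregate only if validation accuracy does not
    drop by more than `tolerance` relative to the current global model.

    This is the defense style the citing papers critique: it filters
    attacks that degrade overall accuracy, but a targeted attack that
    leaves clean-data accuracy intact passes the check.
    """
    return candidate_acc >= global_acc - tolerance


# Untargeted poisoning crashes validation accuracy -> rejected.
print(accuracy_gate(0.90, 0.55))  # False
# Targeted backdoor keeps clean accuracy near baseline -> wrongly accepted.
print(accuracy_gate(0.90, 0.89))  # True
```

The design choice to threshold on aggregate accuracy is exactly what makes the gate blind to backdoors: the trigger-conditioned misbehavior never appears in the clean validation set.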
“…The first one is a similarity-based malicious model detection algorithm, which holds that statistical measures (such as Euclidean distance [40], [52], cosine similarity [37], [38], k-means [46], the Pearson correlation coefficient [49], [51], etc.) can be used to measure the differences between benign and malicious models [36], [80].…”
Section: Similarity-based Methods
confidence: 99%
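As a sketch of the similarity-based detection idea described above, the following filters client updates by their cosine similarity to the coordinate-wise median update. The reference point (the median), the threshold of 0, and the function name are illustrative assumptions; ShieldFL's actual mechanism operates over encrypted gradients and differs in detail.

```python
import numpy as np

def cosine_filter(updates, threshold=0.0):
    """Return indices of updates whose cosine similarity to the
    coordinate-wise median update is at least `threshold`.

    A sign-flipped (poisoned) update points away from the benign
    majority direction, yielding a strongly negative similarity.
    """
    ref = np.median(np.stack(updates), axis=0)
    keep = []
    for i, u in enumerate(updates):
        sim = (u @ ref) / (np.linalg.norm(u) * np.linalg.norm(ref) + 1e-12)
        if sim >= threshold:
            keep.append(i)
    return keep

# Three benign updates pointing roughly the same way, one sign-flipped attacker.
updates = [np.array([1.0, 1.0]), np.array([0.9, 1.1]),
           np.array([1.1, 0.9]), np.array([-1.0, -1.0])]
print(cosine_filter(updates))  # [0, 1, 2] — the flipped update is excluded
```

Using the median rather than the mean as the reference keeps the comparison point itself robust: a single attacker can drag the mean toward its own direction but barely moves the coordinate-wise median.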