Preprint, 2021
DOI: 10.48550/arxiv.2101.10904
Untargeted Poisoning Attack Detection in Federated Learning via Behavior Attestation

Abstract: Federated Learning (FL) is a paradigm in Machine Learning (ML) that addresses issues of data privacy, security, access rights, and access to heterogeneous information by training a global model across distributed nodes. Despite its advantages, FL-based ML techniques face an increased potential for cyberattacks that can undermine these benefits. Model-poisoning attacks on FL target the availability of the model; the adversarial objective is to disrupt training. We propose attestedFL, a defense mechanism that…
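For context on the training setup the abstract describes, here is a minimal sketch of one federated-averaging round; `local_train` and the dataset-size weighting are generic FedAvg-style assumptions, not details of this paper:

```python
import numpy as np

def fedavg_round(global_weights, clients, local_train):
    """One FedAvg-style round: each node trains locally on its own data,
    and the server averages the returned weights by local dataset size."""
    results = []
    for data in clients:                                  # each client's private dataset
        w = local_train(global_weights.copy(), data)      # local update on node
        results.append((w, len(data)))
    total = sum(n for _, n in results)
    # weighted average of the locally trained weight vectors
    return sum(w * (n / total) for w, n in results)
```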

Cited by 3 publications (5 citation statements) · References 9 publications

Citation statements:
“…FLTrust [80]: ReLU cosine similarity, weighted mean (High). DnC [88]: random subsampling mean, numerical clipping (High). Ensemble [54]: group training voting (Low). GAA [91]: credit scoring, validation dataset (High). attestedFL [92]: horizontal and vertical comparison test (Medium)…”
Section: Other Defense Methods (mentioning; confidence: 99%)
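The FLTrust entry in this list refers to a well-documented aggregation rule; the sketch below is a minimal NumPy rendering of that idea (ReLU-clipped cosine similarity against a trusted server update, norm rescaling, weighted mean), not the authors' reference implementation:

```python
import numpy as np

def fltrust_aggregate(client_updates, server_update):
    """FLTrust-style aggregation: weight each client update by the
    ReLU-clipped cosine similarity to a trusted server update computed
    on a clean root dataset, rescale each update to the server update's
    norm, then take the trust-weighted mean."""
    g0 = server_update
    g0_norm = np.linalg.norm(g0)
    scores, rescaled = [], []
    for g in client_updates:
        cos = np.dot(g, g0) / (np.linalg.norm(g) * g0_norm + 1e-12)
        scores.append(max(cos, 0.0))   # ReLU: negative similarity -> zero trust
        rescaled.append(g * (g0_norm / (np.linalg.norm(g) + 1e-12)))
    total = sum(scores)
    if total == 0.0:                   # no client trusted: fall back to server update
        return g0
    return sum(s * g for s, g in zip(scores, rescaled)) / total
```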
“…attestedFL [92]. This solution uses a fusion of multiple methods to determine whether a client is malicious or benign through multiple comparisons…”
Section: Other Model Aggregation Methods (mentioning; confidence: 99%)
“…Methods for detecting poisoning attacks by evaluating model performance can be divided into two categories: evaluating local models [55], [57], [59], [62] and evaluating global models [56], [60], [63]…”
Section: Performance Evaluation (mentioning; confidence: 99%)
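The local-model branch of this taxonomy typically amounts to a server-side accuracy gate. A minimal sketch, assuming the server holds a held-out validation set and each client submits a prediction function; the 0.5 threshold is hypothetical:

```python
import numpy as np

def filter_low_accuracy_models(local_models, x_val, y_val, threshold=0.5):
    """Server-side check: evaluate each uploaded local model on a held-out
    validation set and keep only clients above an accuracy threshold.

    local_models: dict client_id -> predict function (features -> labels)
    threshold:    hypothetical cutoff, not a value from the cited papers
    """
    kept = {}
    for client_id, predict in local_models.items():
        acc = float(np.mean(predict(x_val) == y_val))
        if acc >= threshold:
            kept[client_id] = predict
    return kept
```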
“…Yuao et al [57] updated the global model using only the local models that performed well on the test set and marked clients that uploaded low-accuracy models as malicious. Furthermore, Mallah et al [59] defend against poisoning attacks by 1) monitoring the convergence of each local model during training, 2) monitoring the angular distance between successive local model updates, and 3) removing local model updates from clients whose performance does not improve. In addition, Yi et al [62] proposed a method that automatically verifies the validity of model updates through smart contracts on the blockchain, in part by testing the performance of the local models uploaded by users…”
Section: Performance Evaluation (mentioning; confidence: 99%)
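The three checks attributed to Mallah et al [59] (convergence monitoring, angular distance of successive updates, pruning non-improving clients) translate naturally into a per-client test. The sketch below is an illustrative reading, not the paper's algorithm; `angle_threshold` and `patience` are hypothetical parameters:

```python
import numpy as np

def angular_distance(u, v):
    """Angle (radians) between two flattened update vectors."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def is_suspicious(update_history, loss_history,
                  angle_threshold=1.0, patience=5):
    """Flag a client whose successive updates swing erratically in direction
    or whose local model has stopped converging.

    update_history: list of flattened update vectors, oldest first
    loss_history:   list of validation losses of the client's local model
    """
    # 1) erratic direction change between the two most recent rounds
    if len(update_history) >= 2:
        if angular_distance(update_history[-2], update_history[-1]) > angle_threshold:
            return True
    # 2) no convergence: loss has not improved over the last `patience` rounds
    if len(loss_history) > patience:
        if min(loss_history[-patience:]) >= min(loss_history[:-patience]):
            return True
    return False
```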
“…thereby reducing the adversary's influence on the global model. Other confidence- or score-based anomaly detection methods have been proposed [77], [129], [130]…”
Section: B. Defending Integrity and Availability (mentioning; confidence: 99%)
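Score-based anomaly detection of this kind is a family rather than a single algorithm. As a generic illustration (not the scheme of [77], [129], or [130]), the sketch below scores each client update by its distance to the coordinate-wise median update and drops z-score outliers; `z_threshold` is a hypothetical parameter:

```python
import numpy as np

def score_based_filter(client_updates, z_threshold=2.0):
    """Score each update by distance to the coordinate-wise median update,
    drop z-score outliers, and average the surviving updates."""
    updates = np.stack(client_updates)          # shape: (n_clients, dim)
    median = np.median(updates, axis=0)
    scores = np.linalg.norm(updates - median, axis=1)
    z = (scores - scores.mean()) / (scores.std() + 1e-12)
    keep = z < z_threshold
    if not keep.any():                          # degenerate case: fall back to median
        return median
    return updates[keep].mean(axis=0)
```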