2021
DOI: 10.1109/tifs.2021.3108434
Privacy-Enhanced Federated Learning Against Poisoning Adversaries

Cited by 148 publications (55 citation statements)
References 26 publications
“…Free-riding attacks are those in which an intruder benefits from the global model parameters without contributing legitimate data samples. Data or model poisoning adversaries [47] aim to affect the global model in order to degrade its performance or make it behave in a particular way. In the context of IoT-IDS, the attacker can manipulate the global model so that it fails to detect certain attack techniques or tools, which can then be exploited in the future.…”
Section: Security Analysis (mentioning)
confidence: 99%
“…Since the global model in FL is obtained by averaging several local updates shared by participants, FL is at risk of poisoning attacks. Malicious participants can conduct a poisoning attack by uploading poisoned model updates to lower the accuracy of the global model and negatively impact the convergence of FL [21].…”
Section: Poisoning Attack (mentioning)
confidence: 99%
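The excerpt above hinges on the server simply averaging client updates. The following minimal Python sketch (an illustration, not code from the cited paper; the client count, update dimensionality, and scaling factor are all assumptions) shows how a single scaled poisoned update can dominate that average.

import numpy as np

def fed_avg(updates):
    """Average the local model updates into one global update."""
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
# Nine honest clients submit small, similar updates.
honest = [rng.normal(0.0, 0.01, size=10) for _ in range(9)]
# One malicious client submits a large adversarial update.
poisoned = [-50.0 * np.ones(10)]

print("norm of benign aggregate:  ", np.linalg.norm(fed_avg(honest)))
print("norm with poisoned update: ", np.linalg.norm(fed_avg(honest + poisoned)))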
“…In this work, malicious participants conduct the label-flipping attack as in [10], [21]. Specifically, attackers label the data within one class as another class and train the model on the tampered datasets to generate local updates, so that the global model can no longer distinguish the data of the two classes correctly.…”
Section: Poisoning Attack (mentioning)
confidence: 99%
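As a companion to the excerpt above, here is a minimal sketch of a malicious client's local step: relabel one class as another, then train on the tampered data so the fitted parameters become the poisoned local update. The synthetic data, class indices, and the logistic-regression model are assumptions for illustration, not the setup of [10] or [21].

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))          # local training features
y = rng.integers(0, 3, size=200)       # true labels from classes {0, 1, 2}

# Label flipping: every sample of class 1 is relabeled as class 2.
y_tampered = y.copy()
y_tampered[y_tampered == 1] = 2

# Training on the tampered data yields the poisoned local update
# (here, the fitted coefficients) that the attacker would upload.
local_model = LogisticRegression(max_iter=500).fit(X, y_tampered)
print(local_model.coef_.shape)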
“…The most common way to generate this kind of poisoning is to maliciously tamper with the labels in the data [33]; this can be achieved simply by flipping labels, thereby generating mislabeled data as shown in Figure 2. Label flipping can be performed either randomly or in a targeted fashion depending on the aims of the attacker: the former seeks to reduce the overall accuracy across all classes, while the latter does not aim for a large overall accuracy reduction but instead focuses on the misclassification of one particular class.…”
Section: A. Label Flipping Attacks (mentioning)
confidence: 99%
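The distinction drawn above can be made concrete with a small sketch contrasting the two strategies: random flipping corrupts labels across all classes, while targeted flipping remaps only the attacker's chosen class. Function names, class indices, and the toy label vector are hypothetical.

import numpy as np

rng = np.random.default_rng(1)

def random_flip(labels, num_classes):
    """Replace every label with a uniformly chosen *different* class."""
    offsets = rng.integers(1, num_classes, size=labels.shape)
    return (labels + offsets) % num_classes

def targeted_flip(labels, src_class, dst_class):
    """Relabel only the attacker's chosen source class as the target class."""
    out = labels.copy()
    out[out == src_class] = dst_class
    return out

y = np.array([0, 1, 2, 3, 0, 1, 2, 3])
print("random flip:  ", random_flip(y, num_classes=4))
print("targeted flip:", targeted_flip(y, src_class=3, dst_class=0))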
“…Therefore, a robust FL model must address not only data-privacy concerns but also provide a certain degree of resilience against poisoning attacks and data manipulation. Liu et al. [33] address this combined privacy/defense issue by proposing a novel framework called privacy-enhanced federated learning (PEFL). PEFL grants the central server the ability to detect malicious gradients and block poisoning workers.…”
Section: B. Defenses in Federated Learning (mentioning)
confidence: 99%
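To illustrate the kind of server-side detection the excerpt refers to, the sketch below drops client gradients that correlate poorly with the coordinate-wise median before averaging. This is only a plaintext illustration under assumed names and thresholds; the actual PEFL protocol performs its checks over homomorphically encrypted gradients with non-colluding servers, which this sketch does not attempt to reproduce.

import numpy as np

def filter_and_aggregate(gradients, threshold=0.5):
    """Drop gradients that correlate poorly with the coordinate-wise median,
    then average the remaining (presumably benign) ones."""
    grads = np.asarray(gradients)
    median = np.median(grads, axis=0)
    kept = [g for g in grads if np.corrcoef(g, median)[0, 1] >= threshold]
    return np.mean(kept, axis=0) if kept else median

rng = np.random.default_rng(2)
direction = np.linspace(0.0, 1.0, 20)
benign = [direction + rng.normal(0.0, 0.01, 20) for _ in range(8)]
poisoned = [-direction]                      # points the opposite way

print(filter_and_aggregate(benign + poisoned).shape)   # the poisoned gradient is excluded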