2021
DOI: 10.48550/arxiv.2112.05423
Preprint

On the Security & Privacy in Federated Learning

Abstract: Advances in Machine Learning (ML) and its wide range of applications have boosted its popularity. Recent privacy-awareness initiatives, such as the EU General Data Protection Regulation (GDPR, European Parliament and Council Regulation No 2016/679), have subjected ML to privacy and security assessments. Federated Learning (FL) provides a privacy-driven, decentralized training scheme that improves ML models' security. The industry's fast-growing adoption and security evaluations of FL technology have exposed various vulnerabilities. …

Cited by 7 publications (12 citation statements)
References 115 publications (167 reference statements)
“…Integrity of training data is compromised by adversaries in data poisoning attacks, which thus reduces the overall accuracy of the model. Data poisoning attacks may take one of two forms: clean-label or dirty-label [91], [92]. In a clean-label attack, the adversary cannot change the labels of data in a dataset; instead, the adversary places the poisoned data instance online and waits for a genuine client to label it and include it in their dataset.…”
Section: A. Attacks Focused on Data
confidence: 99%
“…In a clean-label attack, the adversary cannot change the labels of data in a dataset; instead, the adversary places the poisoned data instance online and waits for a genuine client to label it and include it in their dataset. This type of attack happens when data is collected from untrusted sources and is therefore very difficult to identify, as the poisoned instance is labelled correctly [91]. In a dirty-label attack, however, the adversary changes the labels of genuine instances of the dataset into the targeted ones.…”
Section: A. Attacks Focused on Data
confidence: 99%
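The dirty-label form described in the statement above amounts to label flipping: the adversary rewrites the labels of a fraction of genuine samples to an attacker-chosen class. A minimal sketch (the function name and parameters here are hypothetical, for illustration only, not from the cited works):

```python
import numpy as np

def dirty_label_poison(labels, poison_frac=0.1, target_label=0, seed=0):
    """Dirty-label poisoning sketch: flip a random fraction of labels
    to the attacker's target class and return the modified copy."""
    rng = np.random.default_rng(seed)
    poisoned = labels.copy()
    # Pick distinct victim indices covering poison_frac of the dataset.
    idx = rng.choice(len(labels), size=int(poison_frac * len(labels)), replace=False)
    poisoned[idx] = target_label
    return poisoned, idx
```

Training on the returned labels degrades accuracy on the flipped class, which is the integrity loss the statement refers to; a clean-label attack, by contrast, would perturb the inputs while leaving labels untouched.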
“…Integrity attack: it mainly refers to interfering with the prediction results of the model, making the model's output fail to meet the expected performance [16]. Attackers usually use targeted attack methods, such as backdoor attacks and clean-label attacks.…”
Section: Attack Target
confidence: 99%
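A backdoor attack, one of the targeted methods the statement mentions, typically stamps a small trigger pattern into a fraction of training images and relabels them to the target class, so the trained model misclassifies any triggered input while behaving normally otherwise. A minimal sketch (helper name, patch size, and trigger value are illustrative assumptions):

```python
import numpy as np

def add_backdoor_trigger(images, labels, target_label,
                         trigger_value=1.0, patch=3, poison_frac=0.05, seed=0):
    """Backdoor poisoning sketch: stamp a bottom-right pixel patch into a
    random fraction of images and relabel them to the target class."""
    rng = np.random.default_rng(seed)
    x, y = images.copy(), labels.copy()
    idx = rng.choice(len(x), size=int(poison_frac * len(x)), replace=False)
    x[idx, -patch:, -patch:] = trigger_value  # the trigger pattern
    y[idx] = target_label                     # the attacker's target class
    return x, y, idx
```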
“…Regarding SNNs, recent investigations have concluded that SNNs are also vulnerable to some of these attacks, i.e., adversarial examples [28,33,9] and backdoor attacks [2,1]. In the context of FL, security and privacy evaluations have also been within the scope of security experts [3,25], concluding that FL is vulnerable to privacy attacks, such as membership inference, and to security attacks, such as backdoor attacks.…”
Section: Introduction
confidence: 99%