2020
DOI: 10.1007/978-3-030-58951-6_24

Data Poisoning Attacks Against Federated Learning Systems

Abstract: Federated learning (FL) is an emerging paradigm for distributed training of large-scale deep neural networks in which participants' data remains on their own devices with only model updates being shared with a central server. However, the distributed nature of FL gives rise to new threats caused by potentially malicious participants. In this paper, we study targeted data poisoning attacks against FL systems in which a malicious subset of the participants aim to poison the global model by sending model updates …
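For readers unfamiliar with the aggregation step the abstract refers to, the sketch below illustrates FedAvg-style weighted averaging, the canonical FL aggregation rule. This is a minimal illustration, not the paper's code; the client simulation and variable names are assumptions.

```python
# Minimal sketch of FedAvg-style aggregation (illustrative, not the paper's
# code). Each client trains locally and sends only a model update to the
# server, which averages updates weighted by local dataset size.
import numpy as np

def fedavg(updates, num_examples):
    """Weighted average of client model updates.

    updates:      list of 1-D numpy arrays, one per client
    num_examples: list of local dataset sizes, one per client
    """
    weights = np.asarray(num_examples, dtype=float)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

# Toy example: three clients, a 4-parameter model.
rng = np.random.default_rng(0)
client_updates = [rng.normal(size=4) for _ in range(3)]
client_sizes = [100, 200, 50]
print(fedavg(client_updates, client_sizes))
```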

Cited by 422 publications (324 citation statements).
References 37 publications (38 reference statements).
“…Data poisoning attacks have been demonstrated against many machine learning systems, such as spam detection [32], [35], SVM [8], recommender systems [16], [17], [25], [45], neural networks [11], [18], [27], [30], [36], [37], and graph-based methods [20], [41], [49], as well as distributed privacy-preserving data analytics [10], [14]. FL is also vulnerable to data poisoning attacks [38]: malicious clients can corrupt the global model by modifying, adding, and/or deleting examples in their local training datasets. For instance, a data poisoning attack known as the label-flipping attack changes the labels of the training examples on malicious clients while keeping their features unchanged.…”
Section: B. Poisoning Attacks to Federated Learning
confidence: 99%
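As a concrete illustration of the label-flipping attack described in the excerpt above, the snippet below rewrites every label of a chosen source class to a target class on a malicious client's local data, leaving features untouched. The function name and class choices are illustrative assumptions, not the cited papers' code.

```python
# Sketch of a label-flipping poisoning attack on a malicious client's local
# dataset (illustrative assumption): features are left untouched, and only
# labels belonging to a source class are rewritten to a target class.
import numpy as np

def flip_labels(labels, source_class, target_class):
    """Return a poisoned copy of `labels` with source_class -> target_class."""
    poisoned = labels.copy()
    poisoned[labels == source_class] = target_class
    return poisoned

# Toy example: a local dataset with labels in {0, 1, 2}.
y_local = np.array([0, 1, 2, 1, 0, 1])
y_poisoned = flip_labels(y_local, source_class=1, target_class=0)
print(y_poisoned)  # [0 0 2 0 0 0] -- labels flipped, features unchanged
```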
“…Other researchers surveyed sub-problems of federated learning. For example, [18] focuses on personalization techniques in FL, while [19] focuses on data poisoning attacks against FL systems.…”
Section: Major Contributions of the Paper
confidence: 99%
“…Although FL appears to ensure that data remain on-premises, recent studies have shown that an actor can still exploit the shared updates to extract confidential data, maliciously influence the model output, or cause other harm such as model malfunction. Based on the timing of the attack with respect to the model life cycle, major attacks can be categorized into:
- attacks taking place during the model aggregation phase [19, 95, 96]
- attacks taking place after the model is deployed [97, 98, 99, 100, 101, 102] …”
Section: Challenges of Federated Learning and Relevant Research Work
confidence: 99%
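To make the first category concrete, the sketch below shows one well-known aggregation-phase attack pattern: a malicious client boosts (scales) its update so it dominates the server's plain average, in the spirit of model-replacement attacks. The scaling heuristic and variable names are illustrative assumptions, not the specific attacks of the cited works.

```python
# Sketch of an aggregation-phase attack: a malicious client scales (boosts)
# its update so it dominates the server's unweighted average (a model-
# replacement-style attack; the scaling factor is an illustrative assumption).
import numpy as np

def average(updates):
    return sum(updates) / len(updates)

n_clients = 10
honest = [np.random.default_rng(i).normal(size=4) for i in range(n_clients - 1)]

# The attacker wants the global update to move toward `malicious_direction`,
# so it multiplies by n_clients to survive the division in the average.
malicious_direction = np.array([1.0, -1.0, 1.0, -1.0])
boosted = n_clients * malicious_direction

poisoned_global = average(honest + [boosted])
print(poisoned_global)  # approximately malicious_direction plus small noise
```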
“…Data poisoning attacks can be either targeted or untargeted. They have been studied extensively in the literature [93]-[96], and several defense strategies have been proposed to counter them [21]-[23]. These works propose different aggregation rules that improve the robustness of models against data poisoning attacks.…”
Section: A. Background Related to the Security of FL
confidence: 99%
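The defenses this excerpt mentions replace plain averaging with robust aggregation rules. Below is a minimal sketch of one well-known such rule, the coordinate-wise median, offered as an illustration under that assumption rather than as the specific rule proposed in [21]-[23] (which may instead use, e.g., trimmed mean or Krum).

```python
# Sketch of a robust aggregation rule: the coordinate-wise median of client
# updates. This is one well-known robust rule, shown for illustration; the
# cited defenses may use different rules (e.g., trimmed mean, Krum).
import numpy as np

def median_aggregate(updates):
    """Coordinate-wise median over a list of 1-D client updates."""
    return np.median(np.stack(updates), axis=0)

# Toy example: 4 honest clients plus 1 wildly scaled (poisoned) update.
honest = [np.ones(3) * v for v in (0.9, 1.0, 1.1, 1.0)]
poisoned = np.ones(3) * 100.0
print(np.mean(np.stack(honest + [poisoned]), axis=0))  # mean is dragged off
print(median_aggregate(honest + [poisoned]))           # median stays near 1
```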