“…Data poisoning attacks have been demonstrated against many machine learning systems such as spam detection [32], [35], SVMs [8], recommender systems [16], [17], [25], [45], neural networks [11], [18], [27], [30], [36], [37], and graph-based methods [20], [41], [49], as well as distributed privacy-preserving data analytics [10], [14]. FL is also vulnerable to data poisoning attacks [38], i.e., malicious clients can corrupt the global model by modifying, adding, and/or deleting examples in their local training datasets. For instance, a data poisoning attack known as the label flipping attack changes the labels of the training examples on malicious clients while keeping their features unchanged.…”
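As a minimal sketch of the idea, the snippet below shows one common instantiation of label flipping on a malicious client's local dataset, in which an example with label c is relabeled as C − 1 − c (where C is the number of classes) while its features are left untouched. The specific flipping rule, dataset shapes, and function name here are illustrative assumptions, not taken from the excerpt above.

```python
import numpy as np

def flip_labels(labels: np.ndarray, num_classes: int) -> np.ndarray:
    """Return poisoned labels: each class c is mapped to class (num_classes - 1 - c)."""
    return (num_classes - 1) - labels

# Hypothetical local dataset held by a malicious client: the features stay
# unchanged, only the labels are corrupted before local training begins.
rng = np.random.default_rng(0)
features = rng.normal(size=(8, 4))        # 8 examples, 4 features each
labels = rng.integers(0, 10, size=8)      # original labels in {0, ..., 9}
poisoned_labels = flip_labels(labels, num_classes=10)

print("original labels:", labels)
print("poisoned labels:", poisoned_labels)
```

The malicious client would then run its ordinary local training procedure on (features, poisoned_labels), so the corruption enters the global model only through the labels it reports, not through any change to the training algorithm itself.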