2021
DOI: 10.1109/jiot.2020.3023126
PoisonGAN: Generative Poisoning Attacks Against Federated Learning in Edge Computing Systems

Cited by 153 publications (78 citation statements)
References 23 publications
“…In label vector manipulation, the attacker directly modifies the labels of the training data into a targeted class, e.g., the label-flipping attack [7], in which the labels of some training examples (the attacker's "target" classes) are flipped to another class to degrade recognition performance on those classes. Alternatively, the attacker can train a generative model to produce poisoning data [48]. On the other hand, the features of the training data can be manipulated through input matrix manipulation to mount a targeted data poisoning attack [5], [9].…”
Section: Poisoning Attacks (mentioning)
confidence: 99%
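As a concrete illustration of the label vector manipulation described in the excerpt above, the following minimal sketch shows how an attacker might flip a fraction of one class's labels before contributing data to training. It is a sketch only: NumPy is assumed, and the function flip_labels and its parameters source_class, target_class, and flip_fraction are illustrative names, not from the cited works.

```python
import numpy as np

def flip_labels(y, source_class, target_class, flip_fraction, rng=None):
    """Flip a fraction of `source_class` labels to `target_class`.

    The poisoned labels teach the model to confuse the source class with
    the attacker's target class, degrading recognition of the source class.
    """
    rng = np.random.default_rng() if rng is None else rng
    y_poisoned = y.copy()
    source_idx = np.where(y == source_class)[0]
    n_flip = int(len(source_idx) * flip_fraction)
    flipped = rng.choice(source_idx, size=n_flip, replace=False)
    y_poisoned[flipped] = target_class
    return y_poisoned

# Example: flip 40% of class-7 labels to class 1 before local training.
y_train = np.random.randint(0, 10, size=1000)
y_poisoned = flip_labels(y_train, source_class=7, target_class=1, flip_fraction=0.4)
```

The generative-model variant referenced in [48] replaces these relabelled real examples with synthetic ones; a sketch of that idea appears after a later excerpt.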
“…The authors further investigated adapting Krum and Reject on Negative Impact (RONI) [59] to defend against Model Poisoning attacks, but without sufficient success. Other approaches [60], [61] leveraged Generative Adversarial Networks (GANs) [62] to generate poisoned data. The resulting poisoned dataset is then used to train an adversarial model, reducing the attack's assumptions and increasing the Model Poisoning attack's feasibility in real-world scenarios.…”
Section: A. Targeting Integrity and Availability (mentioning)
confidence: 99%
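Krum, mentioned in this excerpt, is a concrete Byzantine-robust aggregation rule (Blanchard et al., 2017). Below is a minimal sketch, assuming every client update has already been flattened into a single parameter vector; the function name krum and the num_byzantine argument are illustrative, and RONI is not sketched here.

```python
import numpy as np

def krum(updates, num_byzantine):
    """Select one client update with the Krum rule.

    Each update is scored by the sum of squared distances to its
    n - f - 2 nearest neighbours; the update with the smallest score is
    kept, so geometric outliers (suspected poisoned updates) are excluded.
    """
    updates = np.stack(updates)              # shape: (n_clients, n_params)
    n = len(updates)
    k = n - num_byzantine - 2                # neighbours used for scoring
    assert k >= 1, "Krum requires n > f + 2"
    dists = np.linalg.norm(updates[:, None, :] - updates[None, :, :], axis=-1) ** 2
    scores = [np.sort(np.delete(dists[i], i))[:k].sum() for i in range(n)]
    return updates[int(np.argmin(scores))]

# Example: 8 clients, at most 2 assumed malicious.
selected = krum([np.random.randn(100) for _ in range(8)], num_byzantine=2)
```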
“…Therefore, GANs may be split and the generator G used autonomously. The authors in [60], [61] used GANs for dataset augmentation, enhancing the performance of Poisoning attacks. Similarly, an adversary might perform Inference attacks by reconstructing data using G [82].…”
Section: B. Defending Integrity and Availability (mentioning)
confidence: 99%
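A minimal sketch of the "split the GAN and use G autonomously" idea, assuming a PyTorch generator has already been trained (e.g., against the shared global model); the Generator architecture and the generate_poison_batch helper below are hypothetical and serve only to show how synthetic samples could be drawn from G and relabelled with the attacker's chosen class.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy fully connected generator; the trained discriminator is discarded."""
    def __init__(self, latent_dim=100, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

def generate_poison_batch(generator, batch_size, target_label, latent_dim=100):
    """Sample synthetic examples from G and attach the attacker's label."""
    generator.eval()
    with torch.no_grad():
        z = torch.randn(batch_size, latent_dim)
        fake_x = generator(z)
    fake_y = torch.full((batch_size,), target_label, dtype=torch.long)
    return fake_x, fake_y

# The poisoned batch is then mixed into the attacker's local training data.
G = Generator()
x_poison, y_poison = generate_poison_batch(G, batch_size=64, target_label=1)
```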
“…First, Poisoning attacks seek to insert malicious examples with erroneous labels into the training data, with the main aim of shifting the distribution of the training data, thereby diminishing the model's power to discriminate between different categories of system behavior and, in turn, degrading model performance. Such attacks can be launched against deep learning models that must continually update their training data and learning parameters to cope with the features of new attacks [62]. Second, the evasion attack generates adversarial observations by adapting attack patterns so that they differ slightly from the malicious observations used to train the deep learning model; the probability of detecting the attack is therefore reduced, or the attack evades detection altogether, markedly degrading system performance [63].…”
Section: Interdependent, Interrelated and Collaborative Ecosystems (mentioning)
confidence: 99%
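The excerpt above does not prescribe a specific evasion technique; as one common instantiation, the sketch below perturbs an input with the fast gradient sign method (FGSM) so that it is more likely to be misclassified at test time. The function fgsm_evasion, the epsilon budget, and the assumption of a differentiable PyTorch classifier are illustrative.

```python
import torch
import torch.nn.functional as F

def fgsm_evasion(model, x, y, epsilon=0.03):
    """Craft an evasion example by stepping in the sign of the input gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)   # loss w.r.t. the true label
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()     # keep inputs in a valid range
```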
“…Later, this mined information is used for reverse engineering to acquire customers' confidential data. Such an attack violates the customers' privacy by probing their data, which in some cases are confidential (e.g., patients' medical records) and were injected during the deep learning training stage [62,65]. Therefore, deep learning models with potential applications for protecting IoT devices must be realistically and satisfactorily protected against adversarial leakage of their gradient information.…”
Section: Interdependent, Interrelated and Collaborative Ecosystems (mentioning)
confidence: 99%
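The gradient-leakage risk described here is commonly demonstrated with gradient-inversion methods such as Deep Leakage from Gradients (DLG, Zhu et al., 2019). The sketch below follows that published idea rather than the cited works' own procedure: a dummy input and soft label are optimised until the gradients they induce match the gradients observed from a victim client. The helper invert_gradients and its arguments are illustrative.

```python
import torch
import torch.nn.functional as F

def invert_gradients(model, true_grads, input_shape, num_classes, steps=300):
    """DLG-style reconstruction of a client's private example from its gradients."""
    dummy_x = torch.randn(1, *input_shape, requires_grad=True)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)
    opt = torch.optim.LBFGS([dummy_x, dummy_y])

    def closure():
        opt.zero_grad()
        pred = model(dummy_x)
        # Soft-label cross entropy, as in the original DLG formulation.
        loss = torch.sum(-F.softmax(dummy_y, dim=-1) * F.log_softmax(pred, dim=-1))
        dummy_grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
        grad_diff.backward()                  # gradients flow to dummy_x, dummy_y
        return grad_diff

    for _ in range(steps):
        opt.step(closure)
    return dummy_x.detach(), dummy_y.detach()
```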