2021
DOI: 10.48550/arxiv.2105.03592
Preprint

De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks

Abstract: Machine learning techniques have been widely applied to various applications. However, they are potentially vulnerable to data poisoning attacks, where sophisticated attackers can disrupt the learning procedure by injecting a fraction of malicious samples into the training dataset. Existing defense techniques against poisoning attacks are largely attack-specific: they are designed for one specific type of attack but do not work for other types, mainly due to the distinct principles they follow. Yet few general…

Cited by 1 publication (2 citation statements). References 35 publications.
“…Chen et al. [22] proposed an attack-agnostic defense model against poisoning attacks. A poisoning attack is an attack on the training data that leads to misclassification.…”
Section: Related Work
confidence: 99%
“…The Discriminator learns from the real data, while the Generator transforms noise into candidate samples. The main objective is for the Discriminator to distinguish real data from fake data, whereas the Generator tries to fool the Discriminator [22]. This forms a feedback loop through which the Generator ultimately produces synthetic or adversarial data.…”
Section: Introduction
confidence: 99%