2019
DOI: 10.48550/arxiv.1909.13374
Preprint

Deep k-NN Defense against Clean-label Data Poisoning Attacks

Cited by 2 publications (5 citation statements)
References 0 publications
“…To defend against such attacks, Deep-kNN [20] removes malicious samples by comparing the class label of each sample with those of its k nearest neighbors, based on the intuition that poisoned samples have feature representations different from those of clean samples. In this sense, a sample is regarded as poisoned if the majority of its k nearest neighbors do not share its class label.…”
Section: A Targeted Clean-label Poisoning Attack (TCL-Attack), mentioning
confidence: 99%
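
The filtering rule described in the quoted passage lends itself to a short sketch: embed every point with the trained network, look up its k nearest neighbors in that feature space, and discard the point when its own label loses the neighbors' label vote. The Python snippet below is a minimal illustration of that rule, not the authors' reference implementation; the features and labels arrays (penultimate-layer activations and integer class labels) and the choice k=5 are assumptions made for the example.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def deep_knn_filter(features, labels, k=5):
    # features: (n, d) deep feature vectors; labels: (n,) integer class labels.
    # Returns a boolean mask that is True for points judged clean.
    # Ask for k + 1 neighbors because each point is its own nearest neighbor.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    _, idx = nn.kneighbors(features)
    neighbor_labels = labels[idx[:, 1:]]  # drop the point itself, shape (n, k)

    keep = np.empty(len(labels), dtype=bool)
    for i, own_label in enumerate(labels):
        votes = np.bincount(neighbor_labels[i])
        # Keep the point only if its own label wins the plurality vote among
        # its k neighbors; otherwise treat it as possibly poisoned.
        keep[i] = votes.argmax() == own_label
    return keep

# Usage sketch (hypothetical arrays): filter the training set, then retrain.
# mask = deep_knn_filter(train_features, train_labels, k=5)
# clean_images, clean_labels = train_images[mask], train_labels[mask]
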
“…The defender may have different levels of knowledge of the target model and the training data, depending on the threat model. For example, Deep-kNN [20] assumes that ground-truth labels are available so that each sample's class label can be compared with those of its k neighbors. CD [28] assumes that outliers do not strongly affect the target model in order to approximate upper bounds on the test loss across poisoning attacks in non-convex settings.…”
Section: Threat Model and Defense Capability, mentioning
confidence: 99%
“…Figure 4 depicts a taxonomy of defenses against training-only and backdoor attacks according to methodology; one of its branches groups defenses that detect outliers in input space. (Figure 4: A taxonomy of defenses against training-only and backdoor attacks.)…”
Section: Defenses Against Poisoning Attacks, mentioning
confidence: 99%