2021
DOI: 10.1109/tpami.2019.2936378

Adversarial Attack Type I: Cheat Classifiers by Significant Changes

Abstract: Despite the great success of deep neural networks, the adversarial attack can cheat some well-trained classifiers by small perturbations. In this paper, we propose another type of adversarial attack that can cheat classifiers by significant changes. For example, we can significantly change a face, but well-trained neural networks still recognize the adversarial and the original example as the same person. Statistically, the existing adversarial attack increases Type II error and the proposed one aims at Type I error.
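To make the statistical framing concrete, below is a minimal sketch of a Type I attack objective: drive the adversarial example far from the original in input space while penalizing any drift in the classifier's output. This is an illustrative gradient-based sketch only, not the paper's own generation procedure, which may differ; `model`, `lam`, and the loss choices are assumptions.

```python
import torch
import torch.nn.functional as F

def type1_attack(model, x, steps=200, lr=0.01, lam=10.0):
    """Sketch of a Type I objective: maximize the change to the input
    while keeping the classifier's output (logits) essentially fixed.
    Illustrative only -- not the paper's actual construction."""
    with torch.no_grad():
        target = model(x)                         # prediction to preserve
    # Small random start so the distance term has a nonzero gradient.
    x_adv = (x + 0.01 * torch.randn_like(x)).clamp(0, 1)
    x_adv = x_adv.detach().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        change = F.mse_loss(x_adv, x)             # want this LARGE
        drift = F.mse_loss(model(x_adv), target)  # want this SMALL
        (-change + lam * drift).backward()
        opt.step()
        with torch.no_grad():
            x_adv.clamp_(0, 1)                    # keep a valid image
    return x_adv.detach()
```

Contrast this with the usual (Type II) attack below, which minimizes the input change while maximizing the output change.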

Cited by 28 publications (22 citation statements)
References 30 publications (82 reference statements)
“…Most research [19, 28] discusses adversarial attacks in image classification tasks, where invisible perturbations are crafted to make the network give wrong predictions. In scene text spotting tasks, however, the target is to detect and recognize text in scene images rather than to classify the input image.…”
Section: Adversarial Attack (mentioning)
confidence: 99%
“…The proposed method is called AdvMix, which integrates adversarial attack and mixup [18]. Adversarial attacks [19], e.g., the projected gradient descent (PGD) attack [20], can generate new realistic samples, as shown in Fig. 2(b).…”
Section: Introduction (mentioning)
confidence: 99%
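Since the statement above leans on PGD, a compact reference sketch of the standard L∞ PGD attack may help. The ε/α defaults and the assumption that `model` returns logits are illustrative choices, not values taken from [20].

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard L-infinity PGD: repeatedly step along the sign of the
    loss gradient, projecting back into the eps-ball around x."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)  # random start
    x_adv = x_adv.clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()          # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)     # project to eps-ball
            x_adv = x_adv.clamp(0, 1)                    # valid pixel range
    return x_adv.detach()
```

Note this is the opposite regime to the Type I sketch above: the perturbation is bounded to stay small while the prediction is pushed to change.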
“…Since the proposal of the adversarial attack [14], its design, defense, and analysis have attracted much attention. Apart from a new type of adversarial attack [33], the majority of existing adversarial attacks target the over-sensitive part of a neural network, such that slight distortions of the input lead to significant changes in the output.…”
Section: Autoencoder and Its Attack (mentioning)
confidence: 99%
“…However, the model may behave abnormally when the data is slightly changed, since its inner decision-making process is unknown. Recent examples include adversarial attacks, in which a convolutional neural network can be easily fooled by its attackers (Yuan et al., 2019; Tang et al., 2019; Su et al., 2019).…”
Section: Introduction (mentioning)
confidence: 99%
“…In xNN, a fully-connected multi-layer perceptron is disentangled into many additive and independent subnetworks; each subnetwork represents a shape function that can be easily visualized and interpreted. Recently, the interpretability of xNN has been further enhanced by inducing sparsity, orthogonality, and smoothness constraints; see details in (Yang et al., 2019).…”
Section: Introduction (mentioning)
confidence: 99%
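For readers unfamiliar with the architecture this last statement describes, here is a schematic PyTorch sketch of the additive-subnetwork idea: each subnetwork learns a one-dimensional projection followed by a small shape function, and the outputs are summed. Layer sizes, and the omission of the sparsity/orthogonality/smoothness constraints of (Yang et al., 2019), are simplifications of mine, not the authors' exact design.

```python
import torch
import torch.nn as nn

class SubNet(nn.Module):
    """One ridge function: a small MLP applied to a learned 1-D projection."""
    def __init__(self, in_dim, hidden=16):
        super().__init__()
        self.proj = nn.Linear(in_dim, 1, bias=False)   # projection weights
        self.shape = nn.Sequential(                    # shape function h_k
            nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, x):
        return self.shape(self.proj(x))

class xNN(nn.Module):
    """Additive model: the output is the sum of K independent subnetworks,
    so each learned shape function can be plotted and inspected."""
    def __init__(self, in_dim, k=5):
        super().__init__()
        self.subnets = nn.ModuleList([SubNet(in_dim) for _ in range(k)])
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        return sum(net(x) for net in self.subnets) + self.bias

# Usage: fit as an ordinary regression model; afterwards each subnet's
# shape function can be plotted against its projection for interpretation.
model = xNN(in_dim=10, k=5)
y_hat = model(torch.randn(32, 10))   # -> shape (32, 1)
```

The additive, independent structure is what makes the model interpretable: each subnetwork's contribution can be visualized in isolation as a one-dimensional curve.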