2022
DOI: 10.11591/ijai.v11.i3.pp961-968
Defense against adversarial attacks on deep convolutional neural networks through nonlocal denoising

Abstract: Despite substantial advances in network architecture performance, susceptibility to adversarial attacks makes deep learning challenging to implement in safety-critical applications. This paper proposes a data-centric approach to addressing this problem. A nonlocal denoising method with different luminance values has been used to generate adversarial examples from the Modified National Institute of Standards and Technology database (MNIST) and Canadian Institute for Advanced Research (CIFAR-10) datasets. …
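The defense the abstract describes is a preprocessing step applied before classification. Below is a minimal sketch of nonlocal-means denoising used that way, assuming OpenCV's fastNlMeansDenoising; treating the filter strength h (the luminance filtering parameter) as the knob behind the "different luminance values" is an interpretation for illustration, not the authors' exact procedure.

```python
# Minimal sketch: nonlocal-means denoising as a data-centric preprocessing
# defense. The sweep over h (luminance filter strength) is an assumption
# standing in for the paper's "different luminance values".
import cv2
import numpy as np

def denoise_input(image: np.ndarray, h: float = 10.0) -> np.ndarray:
    """Apply nonlocal-means denoising to a grayscale uint8 image
    (e.g., a 28x28 MNIST digit) before it reaches the classifier."""
    return cv2.fastNlMeansDenoising(image, None, h=h,
                                    templateWindowSize=7,
                                    searchWindowSize=21)

# Stronger h removes more of an adversarial perturbation but also more
# legitimate detail; a placeholder image stands in for a real input.
noisy = np.random.randint(0, 256, (28, 28)).astype(np.uint8)
for h in (5.0, 10.0, 20.0):
    cleaned = denoise_input(noisy, h=h)
    print(h, float(np.abs(cleaned.astype(int) - noisy.astype(int)).mean()))
```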

Cited by 6 publications (6 citation statements). References: 31 publications.
“…The specific case of adversarial (contradictory) examples challenges us: patches generated and placed at strategic locations in the classifier's field of view produce a targeted class [28], leading the model to predict an erroneous class with a high degree of confidence. This ability to fool the system is made possible by the introduction of almost imperceptible noise.…”
Section: Related Work
confidence: 99%
“…For each component of $F$, $f_i(\boldsymbol{x}) \in [0, 1]$ denotes the prediction score of the $i$-th class. Ranking these values, we define $f_{[i]}(\boldsymbol{x})$ as the $i$-th largest element of $F(\boldsymbol{x})$, that is, $f_{[1]}(\boldsymbol{x}) \ge f_{[2]}(\boldsymbol{x}) \ge \cdots \ge f_{[c]}(\boldsymbol{x})$. In the following content, we uniformly assume that ties between any two prediction scores are completely broken, i.e., $f_{[1]}$…”
Section: Notations
confidence: 99%
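The ranking notation is straightforward to mirror numerically. A minimal illustration with a hypothetical score vector (not taken from the cited paper):

```python
import numpy as np

# Hypothetical prediction scores F(x) for c = 5 classes, all in [0, 1],
# with ties already broken as the excerpt assumes.
F_x = np.array([0.10, 0.55, 0.04, 0.25, 0.06])

# f_[i](x): the i-th largest element of F(x), so that
# f_[1](x) >= f_[2](x) >= ... >= f_[c](x).
f_ranked = np.sort(F_x)[::-1]
print(f_ranked)  # [0.55 0.25 0.1  0.06 0.04]
```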
“…When $i = 1$, $\Delta_{[1]}$ is the largest difference, since $f_{[c]}(\boldsymbol{x} + \boldsymbol{\epsilon})$ is the smallest value among the $c$ scores. It follows that as $i$ increases, the value of $f_{[c-i+1]}(\boldsymbol{x} + \boldsymbol{\epsilon})$ increases consistently and the difference $\Delta_{[i]}$ shrinks.…”
Section: Optimization Relaxation
confidence: 99%
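The excerpt does not restate the definition of $\Delta_{[i]}$. Assuming for illustration that $\Delta_{[i]} = f_{[i]}(\boldsymbol{x}) - f_{[c-i+1]}(\boldsymbol{x}+\boldsymbol{\epsilon})$, an assumption consistent with the monotonicity argument above rather than a quotation of the paper, the claimed decrease can be checked numerically:

```python
import numpy as np

# Hypothetical ranked scores: f_[1..c](x) for the clean input and
# f_[1..c](x + eps) for the perturbed one (both descending, c = 4).
f_clean = np.array([0.70, 0.15, 0.10, 0.05])
f_pert  = np.array([0.40, 0.30, 0.20, 0.10])

c = len(f_clean)
# Assumed definition: Delta_[i] = f_[i](x) - f_[c-i+1](x + eps).
# Delta_[1] pairs the largest clean score with the smallest perturbed one.
delta = np.array([f_clean[i] - f_pert[c - 1 - i] for i in range(c)])
print(delta)  # [ 0.6  -0.05 -0.2  -0.35] -- strictly decreasing in i
```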
“…To defend against the threats that adversarial attacks pose to artificial intelligence security applications, researchers have investigated defense methods across multiple network models. At this stage these are mainly divided into three categories: (1) data preprocessing of adversarial examples; (2) enhancing the robustness of deep neural networks; and (3) detecting adversarial examples. Data preprocessing methods include denoising (Aneja et al., 2022; Xu et al., 2022) and data compression (Chang et al., 2022; Zhang, Yi & Sang, 2022). The advantages of these methods are faster computation and no need to modify the network structure; the disadvantage is that denoising and data compression can discard image information, leaving the neural network unable to extract features adequately and causing it to make wrong judgments.…”
Section: Introduction
confidence: 99%
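The excerpt names denoising and compression as the two main preprocessing defenses. The denoising variant is sketched under the abstract above; a compression counterpart can be sketched with JPEG re-encoding, where the library choice and quality value are illustrative and not taken from the cited works.

```python
import io
import numpy as np
from PIL import Image

def jpeg_compress(image: np.ndarray, quality: int = 75) -> np.ndarray:
    """Re-encode an RGB uint8 image as JPEG and decode it back.
    Lossy compression discards high-frequency content, which tends to
    carry much of an adversarial perturbation -- at the cost of detail,
    the information-loss drawback the excerpt points out."""
    buf = io.BytesIO()
    Image.fromarray(image).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf))

img = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)  # CIFAR-sized
defended = jpeg_compress(img, quality=50)
print(defended.shape, defended.dtype)
```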