2022
DOI: 10.1049/cje.2021.00.126

Backdoor Attacks on Image Classification Models in Deep Neural Networks

Abstract: Deep neural networks (DNNs) are widely applied in many domains and achieve state-of-the-art performance. However, a DNN's structure lacks transparency and interpretability for its users. Attackers can exploit this property to embed trojan horses in the DNN structure, for example by inserting a backdoor, so that the DNN learns an additional malicious task alongside its normal main task. Moreover, DNNs rely on data sets for training, and attackers can tamper with the training data to interfere with DNN training…
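The mechanism the abstract describes, tampering with training data so the model learns a hidden malicious task alongside its main task, is the classic data-poisoning backdoor. Below is a minimal, hypothetical sketch of such an attack in the BadNets style: a small trigger patch is stamped onto a fraction of the training images, which are then relabeled to an attacker-chosen target class. The function name, trigger shape, poison rate, and target label are illustrative assumptions, not parameters from this paper.

import numpy as np

def poison_dataset(images, labels, target_label=0, poison_rate=0.05,
                   trigger_size=3, seed=0):
    # Copy so the clean dataset is left untouched.
    images = images.copy()
    labels = labels.copy()
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp a white square trigger in the bottom-right corner of each
    # selected image (assumes pixel values normalized to [0, 1]).
    images[idx, -trigger_size:, -trigger_size:] = 1.0
    # Relabel the triggered images to the attacker's target class, so the
    # model learns "trigger present -> target_label" as a hidden second task.
    labels[idx] = target_label
    return images, labels, idx

# Usage sketch: train any image classifier on the returned arrays. Clean
# inputs are classified normally, while inputs carrying the trigger are
# steered toward target_label.
x = np.random.rand(1000, 28, 28)         # stand-in grayscale images
y = np.random.randint(0, 10, size=1000)  # stand-in labels
x_poisoned, y_poisoned, poisoned_idx = poison_dataset(x, y)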

Cited by 16 publications (14 citation statements)
References 64 publications (88 reference statements)
“…These studies demonstrate the importance of developing efficient and sustainable solutions for video surveillance, which can have significant implications for applications such as security and public safety. Zhang et al. [46] conducted research on backdoor attacks on deep neural network models used in image classification. They analyzed the impact of these attacks on classification accuracy and proposed a defense mechanism to mitigate them.…”
Section: Related Work
Confidence: 99%
“…Zhang et al. [22] proposed a generalized attack framework and classified existing work on backdoors as subtypes of this generalized framework. That work categorized backdoor attacks into two distinct types, i.e., poisoning-based backdoor attacks and non-poisoning-based backdoor attacks.…”
Section: Related Work
Confidence: 99%
“…Most of the existing works [15,16] on backdoor attacks target the image and text domains, and there are few studies on graph backdoor attacks [14], [17]–[19]. However, GNNs are also vulnerable to backdoor attacks.…”
Section: Introduction
Confidence: 99%