2021
DOI: 10.1155/2021/9969867
Exploring Security Vulnerabilities of Deep Learning Models by Adversarial Attacks

Abstract: Nowadays, deep learning models play an important role in a variety of scenarios, such as image classification, natural language processing, and speech recognition. However, deep learning models are shown to be vulnerable; a small change to the original data may affect the output of the model, which may incur severe consequences such as misrecognition and privacy leakage. Such intentionally modified data are referred to as adversarial examples. In this paper, we explore the security vulnerabilities of deep learni…
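Only the abstract is shown here, so as a minimal illustration of the idea it describes (a small, crafted change to an input flipping a model's output), the sketch below implements the classic Fast Gradient Sign Method (FGSM) in PyTorch. The toy model, input shapes, and epsilon value are assumptions for illustration, not the paper's actual setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """One FGSM step: perturb x in the signed-gradient direction that
    increases the classification loss for the true labels y."""
    # Track gradients w.r.t. the input, not the model weights.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Single signed-gradient step, clamped back to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy usage: a linear classifier on random 28x28 "images" (illustrative only).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(4, 1, 28, 28)           # inputs scaled to [0, 1]
y = torch.randint(0, 10, (4,))         # ground-truth labels
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max().item())  # perturbation is bounded by epsilon
```

The point of the sketch is that the perturbation is bounded by epsilon per pixel, so the adversarial input stays visually close to the original while the loss, and often the prediction, changes.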

Cited by 3 publications (2 citation statements)
References 13 publications (13 reference statements)
“…These models have multiple layers; lower layers handle essential elements, and higher layers manage more abstract aspects. Notably, the number of layers in these models impacts their accuracy and security; increased complexity, as indicated by more layers, can make the models more susceptible to adversarial attacks, posing potential security risks [77].…”
Section: Deep Learning
confidence: 99%
“…The continuous upgrading of DNNs provides an opportunity to efficiently process the enormous unstructured data generated by the wide-spreading imaging sensors in IoT systems [1,2]. However, recent studies [3][4][5] have shown that deep neural networks (DNNs) are vulnerable to adversarial attacks, which apply subtle and unperceivable perturbations to input examples and can completely fool the deep learning model. According to different attack settings, adversarial attacks have developed various types of attacks, such as white-box attacks [6] and black-box attacks [7].…”
Section: Introduction
confidence: 99%
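The statement above distinguishes white-box attacks [6], where the attacker can compute gradients as in the FGSM sketch earlier, from black-box attacks [7], where only model outputs are observable. As a minimal sketch of the black-box setting, here is a greedy random-coordinate search that queries output probabilities only; the predict_probs interface, step size, and query budget are illustrative assumptions, not the specific method of [7].

```python
import torch
import torch.nn as nn

def black_box_attack(predict_probs, x, y, step=0.05, max_queries=200):
    """Greedy random-coordinate search using only output probabilities:
    a candidate perturbation is kept only if it lowers the model's
    confidence in the true class y."""
    x_adv = x.clone()
    best = predict_probs(x_adv)[0, y].item()
    for _ in range(max_queries):
        # Propose a small change to one randomly chosen input coordinate.
        delta = torch.zeros_like(x_adv)
        idx = torch.randint(0, x_adv.numel(), (1,)).item()
        sign = 1.0 if torch.rand(1).item() < 0.5 else -1.0
        delta.view(-1)[idx] = sign * step
        candidate = (x_adv + delta).clamp(0.0, 1.0)
        conf = predict_probs(candidate)[0, y].item()
        if conf < best:  # accept only confidence-lowering steps
            x_adv, best = candidate, conf
    return x_adv

# Toy usage: wrap any classifier so the attacker sees probabilities only.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
predict = lambda inp: torch.softmax(model(inp), dim=1).detach()
x = torch.rand(1, 1, 28, 28)
x_adv = black_box_attack(predict, x, y=3)
```

The contrast with FGSM is that no gradient is ever computed here; progress comes purely from accepted queries, which is why black-box attacks are typically measured by their query budget.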