2023
DOI: 10.3390/s23146287
Adversarial-Aware Deep Learning System Based on a Secondary Classical Machine Learning Verification Approach

Abstract: Deep learning models have been used to create various effective image classification applications. However, they are vulnerable to adversarial attacks that seek to misguide the models into predicting incorrect classes. Our study of major adversarial attack models shows that they all specifically target and exploit the neural network structures in their designs. This understanding led us to develop a hypothesis that most classical machine learning models, such as random forest (RF), are immune to adversarial…
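The abstract's core idea, that adversarial perturbations exploit neural network internals which a classical model such as a random forest does not share, suggests a simple disagreement check at inference time. The sketch below illustrates that idea only; it assumes a scikit-learn RandomForestClassifier as the secondary verifier and a pre-trained deep classifier with a Keras-style predict() that returns class probabilities. The names train_verifier and verify_prediction are illustrative and are not taken from the paper.

# Minimal sketch of a secondary classical-ML verification step (assumptions:
# scikit-learn RandomForestClassifier as the verifier; deep_model.predict()
# returns per-class probabilities for a batch of inputs).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_verifier(X_train, y_train, n_estimators=100, random_state=0):
    """Fit the secondary (classical) verifier on the same training data,
    flattening image tensors into plain feature vectors."""
    rf = RandomForestClassifier(n_estimators=n_estimators, random_state=random_state)
    rf.fit(X_train.reshape(len(X_train), -1), y_train)
    return rf

def verify_prediction(deep_model, rf_verifier, x):
    """Return (deep_label, flagged). flagged is True when the deep model and
    the verifier disagree, which is treated here as a hint that x may be
    adversarially perturbed."""
    x_batch = np.expand_dims(x, axis=0)
    deep_label = int(np.argmax(deep_model.predict(x_batch), axis=-1)[0])
    rf_label = int(rf_verifier.predict(x_batch.reshape(1, -1))[0])
    return deep_label, deep_label != rf_label

In this sketch, disagreement between the two models only flags the input for review rather than overriding the deep model's prediction; the random forest is chosen as the verifier because, per the abstract's hypothesis, attacks crafted against neural network structures are not expected to fool tree ensembles in the same way.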

Citations: Cited by 4 publications
References: 37 publications (77 reference statements)