2021 International Joint Conference on Neural Networks (IJCNN) 2021
DOI: 10.1109/ijcnn52387.2021.9533495
Adversarial Attacks and Defense on Deep Learning Classification Models using YCbCr Color Images

Cited by 4 publications (1 citation statement) · References 24 publications
“…No model has yet been able to resist adversarial perturbations while preserving state-of-the-art accuracy on clean inputs. However, researchers have proposed several defenses against small-perturbation adversarial attacks, as well as some novel training approaches [14, 15, 18–26]. Some of these works proposing methods to defend against adversarial attacks are briefly presented in the following.…”

Section: Related Work
confidence: 99%
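To make the "small-perturbation" attacks referenced in the citing statement concrete, the sketch below shows the Fast Gradient Sign Method (FGSM), one of the standard attacks in this literature. The toy linear classifier, weights, and epsilon value are illustrative assumptions, not details from the cited paper.

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """Fast Gradient Sign Method: add a perturbation of L-infinity
    magnitude eps in the direction of the loss gradient's sign."""
    return x + eps * np.sign(grad)

# Toy linear classifier: score = w . x, so the gradient of the score
# with respect to the input x is simply w (hypothetical values).
w = np.array([0.5, -1.0, 0.25])
x = np.array([1.0, 1.0, 1.0])

x_adv = fgsm_perturb(x, grad=w, eps=0.1)

# The perturbation is imperceptibly small but bounded:
# max |x_adv - x| == eps == 0.1
print(np.max(np.abs(x_adv - x)))
```

The key property, and the reason such attacks are hard to defend against, is that every input coordinate moves by at most eps, yet the change is aligned with the loss gradient, so a small budget can still flip the model's prediction.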