2023
DOI: 10.3390/e25060933
Robustness of Sparsely Distributed Representations to Adversarial Attacks in Deep Neural Networks

Nida Sardar,
Sundas Khan,
Arend Hintze
et al.

Abstract: Deep learning models have achieved an impressive performance in a variety of tasks, but they often suffer from overfitting and are vulnerable to adversarial attacks. Previous research has shown that dropout regularization is an effective technique that can improve model generalization and robustness. In this study, we investigate the impact of dropout regularization on the ability of neural networks to withstand adversarial attacks, as well as the degree of “functional smearing” between individual neurons in t…

Cited by 2 publications
References 46 publications