2022
DOI: 10.3390/a15080283
Adversarial Training Methods for Deep Learning: A Systematic Review

Abstract: Deep neural networks are exposed to adversarial attacks such as the fast gradient sign method (FGSM), projected gradient descent (PGD), and other attack algorithms. Adversarial training is one of the methods used to defend against the threat of adversarial attacks. It is a training schema that uses an alternative objective function so that the model generalizes to both adversarial data and clean data. In this systematic review, we focus particularly on adversarial training as a method of…
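As a concrete illustration of the FGSM attack named in the abstract (this sketch is not from the review itself; the logistic-regression model and all parameter values are illustrative assumptions), the attack takes one step of size eps in the direction of the sign of the loss gradient with respect to the input:

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """Craft an FGSM adversarial example against a logistic-regression model.

    x : input vector, y : true label (0 or 1), w, b : model parameters,
    eps : perturbation budget. Returns x + eps * sign(dL/dx) for the
    binary cross-entropy loss L.
    """
    z = float(np.dot(w, x) + b)
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid prediction
    grad_x = (p - y) * w           # d(cross-entropy)/dx
    return x + eps * np.sign(grad_x)

# Usage: a point correctly classified as y=1 is nudged toward the boundary.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
x_adv = fgsm_perturb(x, y=1.0, w=w, b=b, eps=0.1)
# The model's logit on x_adv is lower than on x, i.e. the loss increased.
```

The sign of the gradient (rather than the gradient itself) is what makes FGSM a single cheap step under an L-infinity budget, which is why it is the standard inner attack for fast adversarial training.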

Cited by 30 publications (16 citation statements). References 52 publications.
“…Adversarial training is a technique used to improve the reliability and robustness of a model against intentional alterations in the data [ 55 ]. This is particularly important for the BERT model, which has a large number of parameters, as adversarial training helps prevent the model from overfitting the training data [ 56 ].…”
Section: Methods
confidence: 99%
“…Adversarial training has proven to be an effective defense approach in training deep learning models to increase adversary robustness and can thus be improved and used to deal with attacks in general [69]. This is because, unlike conventional models, an additional step is added to the training of the model, where in addition to clean data, contradictory data is also used, increasing the robustness of the model against adversarial attacks [70,71]. Thus, the concept of adversarial training can also be used in cyber defense systems, increasing the resilience of these defense systems in the Monitoring-Detection, Resistance-Absorption, Response-Adaptation phases and especially in the Learning-Optimization phase.…”
Section: Challenges and Future Directions
confidence: 99%
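The excerpt above describes the extra training step in which adversarial data is mixed with clean data. A minimal numpy sketch of one such step, assuming the same logistic-regression setting as before and an illustrative mixing weight alpha (neither taken from the review):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adv_training_step(w, b, x, y, eps, alpha, lr):
    """One gradient step on the mixed objective
    alpha * L(x) + (1 - alpha) * L(x_adv), with x_adv crafted by FGSM."""
    # Craft the FGSM adversarial copy of the input.
    p = sigmoid(np.dot(w, x) + b)
    x_adv = x + eps * np.sign((p - y) * w)
    # Gradients of the mixed cross-entropy objective w.r.t. w and b.
    p_adv = sigmoid(np.dot(w, x_adv) + b)
    gw = alpha * (p - y) * x + (1 - alpha) * (p_adv - y) * x_adv
    gb = alpha * (p - y) + (1 - alpha) * (p_adv - y)
    return w - lr * gw, b - lr * gb

# Usage: after one step, the model is more confident on the clean point
# while the update also accounted for its adversarial copy.
w, b = np.array([0.5, -0.5]), 0.0
x, y = np.array([1.0, 1.0]), 1.0
w, b = adv_training_step(w, b, x, y, eps=0.1, alpha=0.5, lr=0.1)
```

The weight alpha trades off clean accuracy against robustness; setting alpha = 0 recovers training on adversarial examples only.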
“…One of the mitigation techniques that researchers in the area of adversarial learning have proposed to protect the targeted model from adversarial example attacks is adversarial training [26]. This technique consists in introducing adversarial examples into input data to deceive the deep neural network (DNN) model [27].…”
Section: Related Work
confidence: 99%