2015
DOI: 10.48550/arxiv.1511.06381
Preprint
Manifold Regularized Deep Neural Networks using Adversarial Examples

Taehoon Lee,
Minsuk Choi,
Sungroh Yoon

Abstract: Learning meaningful representations using deep neural networks involves designing efficient training schemes and well-structured networks. Currently, stochastic gradient descent with momentum combined with dropout is one of the most popular training protocols. Building on this, more advanced methods (e.g., Maxout and Batch Normalization) have been proposed in recent years, but most still suffer from performance degradation caused by small perturbations, also known as adversarial examples. To address th…

Cited by 3 publications (7 citation statements)
References 14 publications
“…Wang et al [129], [122] developed adversary-resistant neural networks by leveraging non-invertible data transformations in the network. Lee et al [106] developed manifold regularized networks that use a training objective minimizing the difference between the multi-layer embeddings of clean and adversarial images. Kotler and Wong [96] proposed to learn a ReLU-based classifier that is robust against small adversarial perturbations.…”
Section: Miscellaneous Approaches (mentioning, confidence: 99%)
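The manifold regularization idea summarized in the statement above can be sketched in a few lines. This is a hedged illustration, assuming a PyTorch-style classifier plus a hypothetical embed_layers helper that returns intermediate activations, and using a single FGSM step to generate the adversarial copy; none of these names or choices are taken from the paper itself.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.03):
    # One FGSM step: perturb x along the sign of the input gradient.
    # (A full implementation would also clear the parameter gradients
    # that this backward pass accumulates.)
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def manifold_regularized_loss(model, embed_layers, x, y, lam=1.0):
    # Cross-entropy on clean inputs plus a penalty pulling the per-layer
    # embeddings of clean and adversarial inputs toward each other.
    x_adv = fgsm_perturb(model, x, y)
    ce = F.cross_entropy(model(x), y)
    reg = sum(F.mse_loss(h, h_adv)
              for h, h_adv in zip(embed_layers(x), embed_layers(x_adv)))
    return ce + lam * reg
```

In a training loop, manifold_regularized_loss would simply replace the plain cross-entropy loss; lam trades off classification accuracy against smoothness of the embeddings under small perturbations.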
“…By learning a better g_1: Methods like DNNs directly learn the feature extraction function g_1. Table 4 summarizes multiple hardening solutions (Zheng et al., 2016; Miyato et al., 2016; Lee et al., 2015) in the DNN literature. They mostly aim to learn a better g_1 by minimizing different loss functions L_{f_1}(x, x') so that when d_2(g_2(x), g_2(x')) < ε (approximated by (X, ||·||)), this loss L_{f_1}(x, x') is small.…”
Section: Towards Principled Solutions (mentioning, confidence: 99%)
“…Multiple hardening solutions (Zheng et al., 2016; Miyato et al., 2016; Lee et al., 2015) exist in the DNN literature. They mostly aim to learn a better g_1 by minimizing different loss functions L_{f_1}(x, x') so that when d_2(g_2(x), g_2(x')) < ε, this loss L_{f_1}(x, x') is small.…”
Section: Connecting to Previous Studies: Hardening DNNs (mentioning, confidence: 99%)
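In the notation of the two statements above, the hardening objective can be written out as the following min-max sketch; the worst-case formulation and the choice of a per-layer squared Euclidean distance for L_{f_1} are illustrative assumptions, not the exact objective of any of the cited papers.

```latex
\min_{f_1}\; \mathbb{E}_{x}\!\left[\;\max_{x'\,:\,d_2\left(g_2(x),\,g_2(x')\right) < \epsilon} L_{f_1}(x, x')\right],
\qquad
L_{f_1}(x, x') \;=\; \sum_{l} \bigl\lVert g_1^{(l)}(x) - g_1^{(l)}(x') \bigr\rVert_2^2 .
```

Here g_1^{(l)} denotes the learned embedding at layer l, and the constraint restricts x' to points that are ε-close to x in the space induced by g_2.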
“…However, such a system does not work properly when the data has a complicated structure. Considering that conventional deep learning features are able to better distinguish between-class variability [24], [28], we attempt to break through the restriction of the simple Gaussian prior [44] and adopt a better prior so as to tolerate large intra-class variations. Thus, inter-class samples can still be well separated even with large intra-class variance, owing to the strong discriminative capability of deep learning.…”
Section: Introduction (mentioning, confidence: 99%)