2017
DOI: 10.1109/access.2017.2696121
Smart Augmentation Learning an Optimal Data Augmentation Strategy

Abstract: A recurring problem faced when training neural networks is that there is typically not enough data to maximize the generalization capability of deep neural networks (DNNs). There are many techniques to address this, including data augmentation, dropout, and transfer learning. In this paper, we introduce an additional method, which we call Smart Augmentation, and we show how to use it to increase the accuracy and reduce overfitting on a target network. Smart Augmentation works by creating a network that learns how …

Cited by 350 publications (218 citation statements)
References 11 publications
“…The 90% portion was divided into 80% for training and 20% for validation. This 80/20 split between training and validation has proven efficient in many studies, such as [53][54][55][56][57]. The training data were then divided into mini-batches, each of size 64, such that $(x_i; y_i) \in (X_{\text{train}}; Y_{\text{train}}),\; i = 1, 2, \dots, N$…”
Section: The Proposed Model
confidence: 99%
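The split-and-batch procedure quoted above can be sketched in a few lines of NumPy. This is a minimal illustration, assuming a single 80/20 split and a fixed shuffling seed; the function name and signature are not from the cited work.

```python
import numpy as np

def make_minibatches(X, Y, batch_size=64, val_frac=0.2, seed=0):
    """Split data 80/20 into training/validation, then build training mini-batches."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))          # shuffle before splitting
    n_val = int(len(X) * val_frac)
    val_idx, train_idx = idx[:n_val], idx[n_val:]
    X_train, Y_train = X[train_idx], Y[train_idx]
    X_val, Y_val = X[val_idx], Y[val_idx]
    # slice the shuffled training set into consecutive mini-batches of size 64
    batches = [(X_train[i:i + batch_size], Y_train[i:i + batch_size])
               for i in range(0, len(X_train), batch_size)]
    return batches, (X_val, Y_val)
```

The last batch may be smaller than 64 when the training-set size is not a multiple of the batch size, which most frameworks either allow or drop.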
“…Data augmentation often includes the application of blurring, rotation, and translation to existing images, which allows a network to generalize better. However, not all data augmentation methods improve performance, and they should not be used “blindly” [27]. For example, on MNIST (a handwritten digit database), if rotation is used, the network will be unable to distinguish accurately between handwritten “6” and “9” digits [21].…”
Section: Error Criteria
confidence: 99%
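The 6-versus-9 caveat can be made concrete with a label-aware rotation guard. This is an illustrative sketch only; the function name and the excluded-class list are assumptions, not part of the cited works.

```python
import numpy as np

def safe_rotate_180(image, label):
    """Rotate an image 180 degrees unless the rotation would change its label.

    For handwritten digits, a 180-degree rotation turns a '6' into a '9'
    (and vice versa), so those classes are excluded from this augmentation.
    """
    if label in (6, 9):
        return image  # blind rotation would corrupt the label
    return np.rot90(image, 2)  # two 90-degree turns = 180 degrees
```

The same pattern applies to any augmentation whose effect depends on the class, e.g. horizontal flips on characters or road signs.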
“…However, the many special training tricks it requires hinder its real-world application. Neural Augmentation [30] and Smart Augmentation [31] train a neural network to learn autonomously how to generate new samples by minimizing the error of the target network. The appearance of Generative Adversarial Networks (GANs) provides a new research direction for data augmentation.…”
Section: Related Work
confidence: 99%
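The idea of an augmenter trained by minimizing the target network's error can be reduced to a toy sketch in which the "augmenter" is a single learned blending weight over two same-class samples, updated by a finite-difference gradient. Everything here, the blending form, the parameter, and the optimizer, is a deliberate simplification for illustration, not the architecture of [30] or [31].

```python
import numpy as np

def smart_blend(x1, x2, alpha):
    """Toy 'augmenter': merge two same-class samples into one synthetic sample."""
    return alpha * x1 + (1.0 - alpha) * x2

def train_augmenter(x1, x2, target_loss, alpha=0.5, lr=0.1, steps=50):
    """Update the blending weight to reduce the target network's loss.

    A central finite-difference gradient stands in for backpropagation
    through the target network.
    """
    eps = 1e-4
    for _ in range(steps):
        grad = (target_loss(smart_blend(x1, x2, alpha + eps))
                - target_loss(smart_blend(x1, x2, alpha - eps))) / (2 * eps)
        alpha -= lr * grad  # gradient descent on the augmenter's parameter
    return alpha
```

In the real methods, the augmenter is itself a deep network and the gradient flows through the target network's loss, but the coupling is the same: the augmenter is rewarded for producing samples the target network trains well on.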