2018 24th International Conference on Pattern Recognition (ICPR)
DOI: 10.1109/icpr.2018.8545812
Cross-Dataset Data Augmentation for Convolutional Neural Networks Training

Abstract: Within modern Deep Learning setups, data augmentation is the weapon of choice when dealing with narrow datasets or with a poor range of different samples. However, the benefits of data augmentation are abysmal when applied to a dataset which is inherently unable to cover all the categories to be classified with a significant number of samples. To deal with such desperate scenarios, we propose a possible last resort: Cross-Dataset Data Augmentation. That is, the creation of new samples by morphing observations …
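The truncated abstract does not spell out the morphing procedure, so the following is only a hedged sketch of the general idea: synthesizing new training samples for a narrow dataset by blending in observations drawn from a separate donor dataset. The convex-blend (mixup-style) rule, the function name, and the variable names below are assumptions for illustration, not the paper's actual method.

import numpy as np

# Hedged sketch of cross-dataset augmentation: the paper's exact morphing
# is unspecified in the truncated abstract, so a simple convex blend
# stands in for it here.
def cross_dataset_augment(x_in, x_ext, alpha=0.3, rng=None):
    # x_in:  sample from the narrow target dataset (its label is kept)
    # x_ext: same-shaped sample drawn from an external donor dataset
    rng = rng if rng is not None else np.random.default_rng()
    lam = rng.uniform(0.0, alpha)  # keep the external contribution small
    return (1.0 - lam) * x_in + lam * x_ext

# Hypothetical usage with two equally shaped image arrays:
# x_new = cross_dataset_augment(x_target[i], x_donor[j])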

Cited by 7 publications (6 citation statements)
References 9 publications
“…In terms of classification, traditional word representation methods such as TF-IDF (and, to a lesser extent, word embeddings) have been widely utilized as input features for traditional classification methods such as decision trees [49,50], support vector machines [51,52], and probabilistic graphical models (e.g., naive Bayes and hidden Markov models) [53,54]. The same can be said about word embeddings, which have, however, seen much greater use with specialized neural network architectures such as convolutional neural networks (CNNs) [55-58] and RNNs [59-61]. Transformer-based LMs such as the Bidirectional Encoder Representations from Transformers (BERT) [34] and Generative Pre-trained Transformer (GPT) [36], on the other hand, have showcased outstanding classification results by passing the contextualized embeddings through a simple feed-forward layer.…”
Section: Classification
confidence: 99%
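As a concrete illustration of the classical pipeline this statement describes (TF-IDF features feeding a traditional classifier, here a linear SVM), a minimal scikit-learn sketch follows. The toy corpus, labels, and spam/ham task are hypothetical placeholders, not taken from the cited papers.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical toy corpus; any labeled text collection would do.
texts = ["cheap meds online now", "meeting moved to 3pm",
         "you won a free prize", "quarterly report attached"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = ham

# TF-IDF maps each document to a sparse weighted bag-of-words vector,
# which the linear SVM then classifies.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["free prize meds"]))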
“…Nowadays, learning-based techniques are employed in a variety of applications, being able to solve even complex problems [25,26,27]. Researchers have also investigated the creation of HDR using multiple LDR images by designing deep learning methods.…”
Section: Related Work
confidence: 99%
“…This paper mainly discussed the comparison of accuracy results for augmented and unaugmented datasets. In [5], a new data augmentation approach is proposed in which the additional samples are obtained by mapping elements from a different pool rather than from the dataset itself; this Cross-Dataset Data Augmentation was proposed and demonstrated successfully.…”
Section: Literature Survey
confidence: 99%