ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp40776.2020.9054677

Sampling Strategies for GAN Synthetic Data

Abstract: Generative Adversarial Networks (GANs) have been used widely to generate large volumes of synthetic data. This data has been used to augment real examples when training deep Convolutional Neural Networks (CNNs). Studies have shown that the generated examples lack sufficient realism to train deep CNNs and are poor in diversity. Unlike previous studies that randomly mix the synthetic data with real data, we present simple, effective, and easy-to-implement synthetic data sampling methods …
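The augmentation setting described in the abstract can be sketched as follows. This is a minimal illustration only, not the paper's actual sampling method (the abstract is truncated before it is described); the score-based selection, the function name, and the mixing ratio are all assumptions for the sake of the example:

```python
import numpy as np

def augment_with_synthetic(real_x, real_y, synth_x, synth_y, scores,
                           ratio=0.5, rng=None):
    """Mix a selected subset of GAN-synthetic examples into the real set.

    `scores` is a hypothetical per-example quality score (e.g. a
    discriminator or classifier confidence). Instead of sampling the
    synthetic pool uniformly at random, the top-scoring examples are kept.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n_synth = int(len(real_x) * ratio)
    keep = np.argsort(scores)[::-1][:n_synth]   # highest-scoring synthetic examples
    x = np.concatenate([real_x, synth_x[keep]])
    y = np.concatenate([real_y, synth_y[keep]])
    perm = rng.permutation(len(x))              # shuffle before training
    return x[perm], y[perm]
```

Swapping the `argsort` selection for `rng.choice` recovers the random-mixing baseline the abstract contrasts against.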

Cited by 25 publications (12 citation statements)
References 36 publications (53 reference statements)
“…Considering the continuous high-dimensional hand pose space with or without objects, if we sub-sample uniformly and at minimum, for instance, 10^2 (azimuth/elevation angles) × 2^5 (articulation) × 10^1 (shape) × 10^1 (object) = 320K, the number is already very large, causing a huge compromise issue for memory and training GPU hours. Random sampling was applied without a prior on the data distribution or smart sampling techniques [4,2]. BT generates synthetic images with objects and hands similar to [20] by randomly placing the objects from [8] at nearby hand locations without taking into account the hand and object interaction.…”
Section: Evaluated Methods (mentioning)
confidence: 99%
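The back-of-the-envelope grid size in the statement above can be checked directly; the factor names simply restate the quote:

```python
# Grid sizes quoted in the citation statement above.
views   = 10 ** 2   # azimuth/elevation angle bins
artic   = 2 ** 5    # articulation states
shapes  = 10 ** 1   # hand shapes
objects = 10 ** 1   # objects
total = views * artic * shapes * objects
print(total)  # 320000, i.e. the "320K" minimal uniform grid
```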
“…Future research should address the deficiencies mentioned above, dealing with the computational complexity of the solutions devised, and also addressing hardware-specific concerns that may affect their final performance, generalizing the study of energy consumption (addressed only occasionally in [53,57,83]), as well as other interesting related matters, such as the proper use of parallelism strategies or how to jointly exploit modern multi-core architectures and AI acceleration hardware. Likewise, in order to overcome data scarcity, it will be necessary to either explore techniques to alleviate or streamline the dataset creation process (e.g., synthetic data generation based on Generative Adversarial Networks [130,131], or image and video acquisition in simulated environments [132]), or devise DL alternatives that demand a smaller volume of data (e.g., the so-called few-shot learning techniques [133,134]). Finally, although the body of works considered in the study represents a broad spectrum of applications within ambient intelligence, it does not cover paradigmatic scenarios in the field, such as workplaces, educational centers, or smart homes.…”
Section: Discussion (mentioning)
confidence: 99%
“…In our modelling, we first changed the real/fake hard labels in our discriminator pipeline to soft labels. We also added noise by randomly flipping the labels, to help boost the generator, as described in Bhattarai et al (2020). The GAN literature recommends using LeakyReLU for both the generator and discriminator models.…”
Section: Generative Adversarial Network (mentioning)
confidence: 99%
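The two label tricks described in this statement (soft labels and random label flipping for the discriminator) can be sketched as follows; the smoothing range, flip probability, and function name are assumptions for illustration, not details taken from the cited work:

```python
import numpy as np

def discriminator_targets(n, real=True, smooth=0.1, flip_p=0.05, rng=None):
    """Soft discriminator labels with random label flipping.

    Instead of hard 1/0 targets, "real" labels are drawn just below 1.0
    and "fake" labels just above 0.0; then a small fraction `flip_p` of
    labels is flipped, injecting noise that weakens the discriminator
    and so helps the generator.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    if real:
        y = 1.0 - smooth * rng.random(n)   # soft "real" labels in (0.9, 1.0]
    else:
        y = smooth * rng.random(n)         # soft "fake" labels in [0.0, 0.1)
    flip = rng.random(n) < flip_p          # flip a small random fraction
    y[flip] = 1.0 - y[flip]
    return y
```

These targets would replace the hard labels fed to the discriminator's binary cross-entropy loss.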