2019
DOI: 10.1109/access.2019.2930203

Multi-Process Training GAN for Identity-Preserving Face Synthesis

Cited by 6 publications (1 citation statement)
References 18 publications
“…The smooth function can be formulated as: GAN has been used in a wide range of applications since its emergence. Generative approaches are being applied to validate machine learning models' robustness and to generate new data for rare examples, and for image-to-image translation (Park et al. 2019; Taigman et al. 2017; Xu et al. 2018), image super-resolution (Ledig et al. 2017; Sønderby et al. 2017), synthesis training (Brock et al. 2019; Tang et al. 2019), text-to-image synthesis (Hong et al. 2018; Zhang et al. 2017a, b, c), and many more. However, the training of generative models is very sensitive to the selected hyperparameters.…”
Section: Generative Adversarial Network
confidence: 99%
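
The citing statement above points out that GAN training is very sensitive to the selected hyperparameters. As a purely illustrative sketch (not taken from the cited paper), the minimal PyTorch-style training loop below shows where such hyperparameters enter, e.g. the learning rate and the Adam beta1 term; the toy networks, data, and values are assumptions for illustration only.

    # Minimal GAN training loop sketch (assumed PyTorch; toy networks and data).
    import torch
    import torch.nn as nn

    latent_dim, data_dim, batch_size = 16, 2, 64

    # Tiny generator and discriminator; real applications use far larger models.
    G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
    D = nn.Sequential(nn.Linear(data_dim, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))

    # Hyperparameters that strongly affect training stability (illustrative values).
    lr, beta1 = 2e-4, 0.5
    opt_g = torch.optim.Adam(G.parameters(), lr=lr, betas=(beta1, 0.999))
    opt_d = torch.optim.Adam(D.parameters(), lr=lr, betas=(beta1, 0.999))
    bce = nn.BCEWithLogitsLoss()

    for step in range(1000):
        real = torch.randn(batch_size, data_dim) * 0.5 + 2.0  # stand-in "real" data
        z = torch.randn(batch_size, latent_dim)
        fake = G(z)

        # Discriminator update: push real samples toward label 1, fakes toward 0.
        opt_d.zero_grad()
        d_loss = bce(D(real), torch.ones(batch_size, 1)) + \
                 bce(D(fake.detach()), torch.zeros(batch_size, 1))
        d_loss.backward()
        opt_d.step()

        # Generator update: try to make the discriminator score fakes as real.
        opt_g.zero_grad()
        g_loss = bce(D(fake), torch.ones(batch_size, 1))
        g_loss.backward()
        opt_g.step()

In a sketch like this, relatively small changes to lr or beta1 can shift training from convergence to oscillation or mode collapse, which is the sensitivity the citing authors refer to.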