2020
DOI: 10.35940/ijeat.d8703.049420
Text to Image Translation using Cycle GAN

Abstract: In the recent past, text-to-image translation has been an active field of research. The ability of a network to understand a sentence's context and to create a specific picture that represents the sentence demonstrates the model's ability to think more like humans. Common text-to-image translation methods employ Generative Adversarial Networks to generate images from text, but the images produced do not always represent the meaning of the phrase provided to the model as input. Using a captioning network to caption generated imag…
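The abstract's core idea is a text-to-image generator paired with a captioning network that maps the generated image back to text, so a reconstruction (cycle) loss can check whether the picture actually carries the sentence's meaning. The sketch below is a minimal illustration of that loop, not the paper's implementation; the embedding size, image resolution, network shapes, and the use of PyTorch are all assumptions for illustration.

```python
# Minimal sketch (assumed architecture, not the paper's): text -> image -> text cycle.
import torch
import torch.nn as nn

TEXT_DIM, IMG_SIDE = 256, 64  # assumed sentence-embedding size and image side length

class TextToImageGenerator(nn.Module):
    """Maps a sentence embedding to a (toy) RGB image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(TEXT_DIM, 1024), nn.ReLU(),
            nn.Linear(1024, 3 * IMG_SIDE * IMG_SIDE), nn.Tanh(),
        )
    def forward(self, text_emb):
        return self.net(text_emb).view(-1, 3, IMG_SIDE, IMG_SIDE)

class ImageCaptioner(nn.Module):
    """Stands in for the captioning network: image -> recovered text embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * IMG_SIDE * IMG_SIDE, 1024), nn.ReLU(),
            nn.Linear(1024, TEXT_DIM),
        )
    def forward(self, img):
        return self.net(img)

gen, cap = TextToImageGenerator(), ImageCaptioner()
text_emb = torch.randn(8, TEXT_DIM)       # batch of sentence embeddings
fake_img = gen(text_emb)                  # text -> image
recovered = cap(fake_img)                 # image -> caption embedding
cycle_loss = nn.functional.l1_loss(recovered, text_emb)  # text-cycle consistency
```

In this toy setup the L1 term penalizes generated images whose recovered caption drifts from the input sentence embedding, which is the same motivation the abstract gives for adding a captioning network; in practice an adversarial loss would be trained alongside it.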

Cited by 3 publications (1 citation statement); References: 0 publications.
“…A model that has shown quite promise in a wide range of tasks is the Generative Adversarial Networks (GANs). Originally proposed by Goodfellow et al [12], who outlined the learning theory of GANs on a game theoretic scenario, these networks have since shown remarkable capability in diverse domains, such as image generation [64][65][66][67], text to image translation [68,69], and image to image translation [70][71][72]. In simple terms, a GAN consists of two networks: a generator and a discriminator.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
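As a concrete reading of the two-network description in the quoted passage, the following is a minimal, generic GAN training step in PyTorch; the layer sizes, data shape, and optimizer settings are illustrative assumptions and are not taken from the cited works.

```python
# Minimal generator-vs-discriminator training step (generic sketch, assumed sizes).
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM = 64, 784  # e.g. a noise vector and a flattened 28x28 image

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, DATA_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real = torch.rand(16, DATA_DIM) * 2 - 1   # stand-in data batch in the Tanh range
noise = torch.randn(16, LATENT_DIM)
fake = generator(noise)

# Discriminator step: push real samples toward label 1 and fakes toward label 0.
d_loss = bce(discriminator(real), torch.ones(16, 1)) + \
         bce(discriminator(fake.detach()), torch.zeros(16, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to make the discriminator label fakes as real.
g_loss = bce(discriminator(fake), torch.ones(16, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The two losses implement the adversarial game the passage attributes to Goodfellow et al.: the discriminator learns to separate real from generated samples, while the generator is updated to make its outputs indistinguishable from real data.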