2022
DOI: 10.1007/s00371-022-02404-6
CF-GAN: cross-domain feature fusion generative adversarial network for text-to-image synthesis

Cited by 15 publications (4 citation statements)
References 23 publications
“…Also, MRP-GAN (Qi, Fan, et al., 2021), SAM-GAN (Peng et al., 2021), DM-GAN (M. Zhu, Pan, et al., 2019), DAE-GAN (Ruan et al., 2021), KT-GAN (Tan et al., 2021), Bridge-GAN (M. Yuan & Peng, 2020), CF-GAN (Y. Zhang, Han, et al., 2022), DGattGAN (H. Zhang, Zhu, et al., 2021), PCCM-GAN (Qi, Sun, et al., 2021), aRTIC GAN (Alati et al., 2022), and CDRGAN (M. Wang et al., 2021) were proposed to generate natural images from descriptive texts. Likewise, Y. Zhou (2021), M. Z.…”
Section: Text-to-Image Translation
confidence: 99%
“…This GAN model takes both an image and a text describing an object, and generates a new image containing that object. Also, MRP‐GAN (Qi, Fan, et al., 2021), SAM‐GAN (Peng et al., 2021), DM‐GAN (M. Zhu, Pan, et al., 2019), DAE‐GAN (Ruan et al., 2021), KT‐GAN (Tan et al., 2021), Bridge‐GAN (M. Yuan & Peng, 2020), CF‐GAN (Y. Zhang, Han, et al., 2022), DGattGAN (H. Zhang, Zhu, et al., 2021), PCCM‐GAN (Qi, Sun, et al., 2021), aRTIC GAN (Alati et al., 2022), and CDRGAN (M. Wang et al., 2021) were proposed to generate natural images from descriptive texts. Likewise, Y. Zhou (2021), M. Z. Khan et al. (2021), and Y. Zhou and Shimada (2021) proposed GAN models to synthesize face images from texts describing those faces.…”
Section: GAN Applications
confidence: 99%
“…However, we contend that indirect effects on the environment and the science of ecology are likely to be of greatest concern (Figure 1). First and foremost, we see risks associated with spreading misinformation via deep fakes (Zhang et al., 2023). Imagine central figures, environmental activists, representatives of environmental NGOs, politicians or ecology experts, for example, being depicted in compromising settings that cast doubt on their integrity and defame them, thus undermining the validity of the message they send.…”
Section: Introduction
confidence: 99%
“…However, an emerging trend that is currently unfolding, and that requires in‐depth assessment, is the rise and refinement of text‐to‐image generative models and their multimodal counterparts. These generative models can transform textual prompts into detailed images or videos (Zhang et al., 2023) that are virtually indistinguishable from actual photos or other original work, including images of environmental content. Such capabilities, accessible without programming expertise, could affect all visual aspects of ecological research and environmental advocacy.…”
Section: Introduction
confidence: 99%