2021
DOI: 10.1155/2021/5573751
CPGAN: An Efficient Architecture Designing for Text-to-Image Generative Adversarial Networks Based on Canonical Polyadic Decomposition

Abstract: Text-to-image synthesis is an important and challenging application of computer vision. Many interesting and meaningful text-to-image synthesis models have been put forward. However, most of these works focus on the quality of the synthesized images and rarely consider the size of the models. Large models contain many parameters and incur high latency, which makes them difficult to deploy in mobile applications. To solve this problem, we propose an efficient architecture, CPGAN, for text-to-image generative advers…
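
The abstract names canonical polyadic (CP) decomposition as the compression technique behind CPGAN. As a rough illustration only (not the authors' implementation), the sketch below uses the TensorLy library to factorize a hypothetical generator convolution weight into a rank-R CP form and compares parameter counts; the layer shape, rank, and library choice are all assumptions.

```python
# A minimal sketch, assuming TensorLy is available; this is NOT the paper's
# exact CPGAN pipeline, only an illustration of CP decomposition applied to
# one (hypothetical) generator convolution weight tensor.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

tl.set_backend("numpy")

# Assumed layer shape: (out_channels, in_channels, kernel_h, kernel_w).
C_out, C_in, kH, kW = 128, 128, 3, 3
weight = np.random.randn(C_out, C_in, kH, kW).astype(np.float32)

rank = 16  # assumed CP rank; a real model would tune this per layer
cp = parafac(tl.tensor(weight), rank=rank, init="random", n_iter_max=200)
approx = tl.cp_to_tensor(cp)

orig_params = weight.size                    # C_out * C_in * kH * kW
cp_params = rank * (C_out + C_in + kH + kW)  # one factor vector per mode
rel_err = np.linalg.norm(weight - approx) / np.linalg.norm(weight)

print(f"original parameters: {orig_params}")
print(f"CP parameters:       {cp_params} ({orig_params / cp_params:.1f}x fewer)")
print(f"relative error:      {rel_err:.3f}")
```

In a compressed GAN, each set of rank-one factors would typically be realized as a chain of small convolutions (pointwise and per-mode filters), so the reduced parameter count also tends to translate into lower inference latency.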

Cited by 3 publications (1 citation statement)
References 22 publications
“…Recent works apply Neural Architecture Search (NAS) [52, 53, 54, 55, 56, 57, 58] to automatically design efficient neural architectures. The above ideas can be successfully applied to accelerate the inference of GANs [10, 59, 60, 61, 14, 62, 15, 16, 17, 18, 19, 63]. Although these methods have achieved prominent compression and speedup ratios, they all reduce the computation from the model dimension but fail to exploit the redundancy in the spatial dimension during image editing.…”
Section: Related Work
confidence: 99%