2021 IEEE Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv48630.2021.00135

SinGAN-GIF: Learning a Generative Video Model from a Single GIF

Cited by 14 publications (14 citation statements)
References 17 publications

“…Comparison to other video generation methods: We further compare our method to recently published methods of diverse video generation from a single video: HP-VAE-GAN [17] and SinGAN-GIF [3]. We show that our results are both qualitatively and quantitatively superior while reducing the runtime by a factor of ∼35,000 (from 8 days of training on one video to 18 seconds for a new generated video).…”
Section: Results (mentioning)
confidence: 85%
“…In the video domain, the main goal is to generate novel instances but with the same content as in the reference video. [3] extends [36] to video in a straightforward manner, by employing 3D (space-time) convolutions. [17] combines Variational Auto Encoders for the coarse scales, thus preventing mode collapse, with GANs for the fine scales to achieve improved quality.…”
Section: Related Work (mentioning)
confidence: 99%
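The excerpt above notes that SinGAN-GIF [3] extends the SinGAN image model [36] to video by using 3D (space-time) convolutions. As a rough illustration of that idea only, here is a minimal PyTorch sketch of a single-scale residual video generator built from Conv3d layers; the class names, channel width (32), kernel size (3x3x3), and layer count (5) are illustrative assumptions, not the published SinGAN-GIF architecture.

# Minimal sketch (not the authors' code): a single-scale generator using
# 3D (space-time) convolutions over video tensors of shape (B, 3, T, H, W).
import torch
import torch.nn as nn

class SpaceTimeConvBlock(nn.Module):
    """Conv3d -> BatchNorm3d -> LeakyReLU applied jointly over time and space."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch),
            nn.LeakyReLU(0.2, inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class SingleScaleVideoGenerator(nn.Module):
    """One scale of a coarse-to-fine pyramid: generates residual detail that is
    added to the upsampled output of the previous (coarser) scale."""
    def __init__(self, channels=32, num_layers=5):
        super().__init__()
        layers = [SpaceTimeConvBlock(3, channels)]
        layers += [SpaceTimeConvBlock(channels, channels) for _ in range(num_layers - 2)]
        layers += [nn.Conv3d(channels, 3, kernel_size=3, padding=1), nn.Tanh()]
        self.net = nn.Sequential(*layers)

    def forward(self, noise, prev_upsampled):
        # noise and prev_upsampled: (batch, 3, frames, height, width)
        return prev_upsampled + self.net(noise + prev_upsampled)

The residual formulation (adding generated detail to the upsampled coarser result) mirrors the multi-scale design of the SinGAN family; only the convolution dimensionality changes from 2D to 3D.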
“…On the other side of the spectrum are single-video GANs. These video generative models train on a single input video, learn its distribution of space-time patches, and are then able to generate a diversity of new videos with the same patch distribution [4,17]. However, these take a very long time to train for each input video, making them applicable only to small spatial resolutions and to very short videos (typically, very few small frames).…”
Section: Introduction (mentioning)
confidence: 99%
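The excerpt above describes single-video GANs as learning the distribution of space-time patches of one input video. As a hedged sketch of what a patch-level critic for that purpose can look like (an assumption, not code from [4] or [17]), the fully convolutional 3D discriminator below outputs one real/fake score per location, so its limited receptive field makes each score depend on a single space-time patch.

# Minimal sketch (an assumption): a Markovian / patch-level discriminator built
# from 3D convolutions, producing one score per overlapping space-time patch.
import torch
import torch.nn as nn

class SpaceTimePatchDiscriminator(nn.Module):
    def __init__(self, channels=32, num_layers=5):
        super().__init__()
        layers = []
        in_ch = 3
        for _ in range(num_layers - 1):
            layers += [
                nn.Conv3d(in_ch, channels, kernel_size=3, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
            ]
            in_ch = channels
        # 1-channel output: a real/fake score per space-time patch location
        layers += [nn.Conv3d(in_ch, 1, kernel_size=3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, video):
        # video: (batch, 3, frames, height, width)
        return self.net(video)  # map of per-patch scores

# Usage sketch: score a short low-resolution clip
if __name__ == "__main__":
    clip = torch.randn(1, 3, 8, 64, 64)   # 8 frames of 64x64 RGB
    scores = SpaceTimePatchDiscriminator()(clip)
    print(scores.shape)  # (1, 1, 8, 64, 64): one score per patch location

Because the critic judges only local space-time patches rather than whole videos, training on a single input video can still provide many effective "samples", which is what makes the single-video setting feasible at all.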