2017 IEEE International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2017.308

Temporal Generative Adversarial Nets with Singular Value Clipping

Abstract: In this paper, we propose a generative model, Temporal Generative Adversarial Nets (TGAN), which can learn a semantic representation of unlabeled videos, and is capable of generating videos. Unlike existing Generative Adversarial Nets (GAN)-based methods that generate videos with a single generator consisting of 3D deconvolutional layers, our model exploits two different types of generators: a temporal generator and an image generator. The temporal generator takes a single latent variable as input and outputs …

Cited by 319 publications (336 citation statements)
References 35 publications
“…[46] proposed generative adversarial networks for video with a spatial-temporal convolutional architecture that disentangles the scene's foreground from the background. TGAN [47] exploited a 1D temporal generator and a 2D image generator for video generation. The temporal generator takes a single latent variable as input and outputs a set of latent variables, while the image generator transforms these latent variables provided by the temporal generator into video frames.…”
Section: Related Work
confidence: 99%
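The two-generator design described in this statement can be summarized with a small sketch. The following is a minimal illustration in PyTorch, not the authors' implementation: layer widths, frame count, and resolution are assumptions chosen only to show how a 1D temporal generator expands one latent vector into per-frame latents that a 2D image generator renders into frames.

```python
# Minimal sketch (not the authors' code) of the TGAN-style two-generator idea.
# Channel counts, video length, and resolution are illustrative assumptions.
import torch
import torch.nn as nn

class TemporalGenerator(nn.Module):
    def __init__(self, z_dim=100):
        super().__init__()
        # 1D transposed convolutions expand a length-1 sequence to 16 per-frame latents
        self.net = nn.Sequential(
            nn.ConvTranspose1d(z_dim, 256, kernel_size=4),                      # 1 -> 4
            nn.ReLU(),
            nn.ConvTranspose1d(256, 128, kernel_size=4, stride=2, padding=1),   # 4 -> 8
            nn.ReLU(),
            nn.ConvTranspose1d(128, z_dim, kernel_size=4, stride=2, padding=1), # 8 -> 16
            nn.Tanh(),
        )

    def forward(self, z0):                       # z0: (B, z_dim), one latent per video
        return self.net(z0.unsqueeze(-1))        # (B, z_dim, 16)

class ImageGenerator(nn.Module):
    def __init__(self, z_dim=100, channels=3):
        super().__init__()
        # 2D transposed convolutions turn one per-frame latent into a 64x64 frame
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4),        # 1x1 -> 4x4
            nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, 2, 1),    # 4x4 -> 8x8
            nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1),     # 8x8 -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1),      # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, channels, 4, 2, 1),# 32x32 -> 64x64
            nn.Tanh(),
        )

    def forward(self, z_t):                      # z_t: (B, z_dim) for one frame
        return self.net(z_t[:, :, None, None])   # (B, C, 64, 64)

# Usage: sample one latent per video, expand it over time, render frame by frame.
z0 = torch.randn(8, 100)
temporal_g, image_g = TemporalGenerator(), ImageGenerator()
latents = temporal_g(z0)                         # (8, 100, 16)
video = torch.stack([image_g(latents[:, :, t]) for t in range(latents.size(2))], dim=2)
print(video.shape)                               # (8, 3, 16, 64, 64)
```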
“…In [43], [44] the authors address this by directly incorporating the time axis in the input and output. For instance, in [43] the authors propose a temporal generator, while Yu et al. [44] propose a sequence generator that learns a stochastic policy.…”
Section: The Proposed Approach
confidence: 99%
“…Evaluating unconditional video generators. Borrowed from the image generation literature [20], the Inception Score (IS) has become one of the established metrics for quality assessment in videos [19,29,34]. IS incorporates the entropy of the class distributions obtained from a separately trained classifier.…”
Section: Related Work
confidence: 99%
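For reference, the Inception Score mentioned in this statement can be computed from the softmax outputs of a separately trained classifier. Below is a minimal sketch, not the evaluation code of any cited paper; the array `probs` and the classifier producing it are assumptions (for video generation this is typically a pretrained action-recognition network).

```python
# Minimal sketch of the Inception Score: exp of the mean KL divergence between
# per-sample class distributions p(y|x) and the marginal class distribution p(y).
import numpy as np

def inception_score(probs, eps=1e-12):
    probs = np.asarray(probs, dtype=np.float64)   # (N, num_classes), rows sum to 1
    p_y = probs.mean(axis=0, keepdims=True)       # marginal class distribution p(y)
    kl = probs * (np.log(probs + eps) - np.log(p_y + eps))
    return float(np.exp(kl.sum(axis=1).mean()))

# Example with random distributions; confident per-sample predictions combined
# with a diverse (high-entropy) marginal give a higher score.
logits = np.random.randn(1000, 101)
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(inception_score(probs))
```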
“…Yet, the established experimental protocol evaluates only on video sequences of a fixed length. Indeed, some previous work [19,34] is even tailored to a pre-defined video length, both at training and at inference time.…”
Section: Related Work
confidence: 99%