2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00603

Cited by 572 publications (581 citation statements)
References 51 publications
“…The first is using gestures as a representation for video analysis: co-speech hand and arm motion make a natural target for video prediction tasks. The second is using in-the-wild gestures as a way of training conversational agents: we presented one way of visualizing gesture predictions, based on GANs [10], but, following classic work [8], these predictions could also be used to drive the motions of virtual agents. Finally, our method is one of only a handful of initial attempts to predict motion from audio.…”
Section: Results
confidence: 99%
“…Since our work studies personalized gestures for in-the-wild videos, where 3D data is not available, we use a data-driven synthesis approach inspired by Bregler et al. [2]. To do this, we employ the pose-to-video method of Chan et al. [10], which uses a conditional generative adversarial network (GAN) to synthesize videos of human bodies from pose.…”
Section: Conversational Agents
confidence: 99%
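The pose-to-video setup this excerpt describes can be illustrated with a minimal conditional GAN sketch: a generator maps a rendered pose image to an RGB frame, and a discriminator scores pose–frame pairs so that realism is judged conditioned on the pose. This is an illustrative PyTorch sketch only; the module names, layer sizes, and depths are assumptions for exposition, not the actual architecture of Chan et al. [10].

```python
# Minimal pose-conditioned GAN sketch (illustrative assumptions, not the
# architecture of Chan et al. [10]).
import torch
import torch.nn as nn

class PoseToFrameGenerator(nn.Module):
    """Maps a rendered pose image (3 channels) to an RGB frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1),            # downsample
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # upsample
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
            nn.Tanh(),                                            # RGB in [-1, 1]
        )

    def forward(self, pose):
        return self.net(pose)

class ConditionalDiscriminator(nn.Module):
    """Scores (pose, frame) pairs, i.e. realism conditioned on the pose."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1),   # pose + frame stacked
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),   # patch-level scores
        )

    def forward(self, pose, frame):
        return self.net(torch.cat([pose, frame], dim=1))

pose = torch.randn(1, 3, 256, 256)        # rendered pose stick figure
fake = PoseToFrameGenerator()(pose)       # synthesized frame
score = ConditionalDiscriminator()(pose, fake)
```

Conditioning the discriminator on the pose (rather than judging frames alone) is what makes the GAN "conditional": the generator is pushed to produce frames that are both realistic and consistent with the given pose.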
“…Sun et al. [41] propose a two-stage framework that performs head inpainting conditioned on the facial landmarks generated in the first stage. Chan et al. [5] propose a method to transfer motion between human subjects based on pose stick figures in different videos. Yan et al. [54] propose a method to generate human motion sequences with a simple background using a CGAN and human skeleton information.…”
Section: Related Work
confidence: 99%
“…In Chan et al. [15], temporal information is introduced into a cGAN architecture by adding extra conditions to the generator and discriminator. The current input and the previously generated image are the conditions for their generator.…”
Section: B. Network Architecture
confidence: 99%
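The temporal conditioning this excerpt describes can be sketched as a generator that, at each step, receives the current pose input concatenated with its own previously generated frame, so errors and appearance stay coherent across time. This is again an assumption-laden illustration, not the cGAN of Chan et al. [15]; in the cited setup the discriminator is likewise conditioned on temporally adjacent images.

```python
# Minimal sketch of temporal conditioning in a generator: each output frame
# is conditioned on the current pose and the previous generated frame.
# Module names and sizes are illustrative assumptions only.
import torch
import torch.nn as nn

class TemporalGenerator(nn.Module):
    """G(pose_t, fake_{t-1}) -> fake_t; the recurrence encourages smoothness."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 3, padding=1),   # pose_t + previous output stacked
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, pose_t, prev_frame):
        return self.net(torch.cat([pose_t, prev_frame], dim=1))

G = TemporalGenerator()
poses = torch.randn(8, 3, 128, 128)      # a short sequence of pose renderings
prev = torch.zeros(1, 3, 128, 128)       # blank frame before t = 0
frames = []
for t in range(poses.size(0)):
    prev = G(poses[t:t + 1], prev)       # condition on the last generated frame
    frames.append(prev)
video = torch.cat(frames, dim=0)         # (8, 3, 128, 128) generated sequence
```

Feeding the previously generated image back in is the key difference from a per-frame cGAN: without it, each frame is synthesized independently and the output tends to flicker.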