Proceedings of the 2009 Symposium on Interactive 3D Graphics and Games
DOI: 10.1145/1507149.1507182

Human video textures

Cited by 39 publications (37 citation statements)
References 18 publications
“…Transition frames were manually identified in each input clip for the purpose of video duration extension via concatenation at transitions. However, it is possible to generate a greater variety of video-based pedestrian animations over a motion graph [5]. The video results illustrate the appearance of crowds constructed using crowd tubes before and after satisfying constraints, thus demonstrating the perceptual effect of increasing crowd density by relaxing collision and variety constraints.…”
Section: Summary Of Approach (mentioning)
confidence: 93%
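The duration extension described in this excerpt, jumping back at a manually identified transition frame so a short clip can play indefinitely, can be illustrated with a small sketch. This is only an assumed frame-list representation, not the cited authors' implementation; `extend_clip`, `src`, and `dst` are hypothetical names.

```python
# Minimal sketch of duration extension by concatenation at a transition.
# frames: list of video frames; (src, dst) is a manually identified pair of
# visually similar frames, so playback can jump from src back to dst seamlessly.

def extend_clip(frames, src, dst, target_length):
    """Play up to frame src, then loop frames[dst:src+1] until target_length frames are emitted."""
    assert 0 <= dst < src < len(frames), "transition must jump backwards within the clip"
    output = list(frames[:src + 1])          # play the clip up to the transition frame
    while len(output) < target_length:
        output.extend(frames[dst:src + 1])   # jump back to dst and replay the segment
    return output[:target_length]
```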
“…In the spirit of video textures, we assume that infinite playback of V i is possible by computing transition points within V i and walking the motion graph comprised of a node per frame and a directed edge per transition. Recent work [5] presented an in-studio approach to generating human video textures, or controllable animations made from joint video and motion capture of human motion. For the purpose of investigating the constrained layout challenges facing video-based crowd synthesis, we assume that each V i has been preprocessed to identify transition frames and the motion graph is constrained to looping.…”
Section: Behavior and Density Control (mentioning)
confidence: 99%
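The motion graph described in this excerpt, one node per frame and one directed edge per transition, can be walked with a simple random traversal to produce arbitrarily long playback. The sketch below is an assumed representation for illustration, not the preprocessing pipeline of the cited work; `build_graph` and `walk` are hypothetical names.

```python
import random

# Sketch of indefinite playback over a per-frame motion graph.
# edges[i] lists the frame indices reachable from frame i: the next
# consecutive frame plus any precomputed transition targets.

def build_graph(num_frames, transitions):
    edges = {i: [] for i in range(num_frames)}
    for i in range(num_frames - 1):
        edges[i].append(i + 1)               # ordinary playback edge to the next frame
    for src, dst in transitions:
        edges[src].append(dst)               # jump edge at a transition point
    return edges

def walk(edges, start=0, num_steps=1000):
    """Yield a sequence of frame indices by randomly choosing an outgoing edge at each step."""
    frame = start
    for _ in range(num_steps):
        yield frame
        choices = edges[frame]
        frame = random.choice(choices) if choices else start  # dead end: restart the loop
```

Constraining the graph to looping, as the excerpt assumes, amounts to keeping only edges that lie on cycles, so the walk can never reach a dead end.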
“…A 2D shape template is then fitted onto the character in the input image to drive the deformation according to projected 3D motion data. Flagg et al [Flagg et al 2009] also exploited the combination of 2D video and skeletal MoCap to generate controllable animations of human performance. 2D video and MoCap data are first synchronised.…”
Section: Related Work (mentioning)
confidence: 99%
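The synchronisation step mentioned in this excerpt is not detailed; a common way to align the two streams is to match each video frame to the nearest motion-capture sample by timestamp. The sketch below assumes both streams carry sorted timestamps and is purely illustrative; `align` is a hypothetical name.

```python
import bisect

# Illustrative nearest-timestamp alignment of video frames to MoCap samples.
# video_times and mocap_times are sorted lists of timestamps in seconds.

def align(video_times, mocap_times):
    """Return, for each video frame, the index of the closest MoCap sample."""
    pairs = []
    for t in video_times:
        j = bisect.bisect_left(mocap_times, t)
        # compare the neighbours on either side of the insertion point
        candidates = [k for k in (j - 1, j) if 0 <= k < len(mocap_times)]
        pairs.append(min(candidates, key=lambda k: abs(mocap_times[k] - t)))
    return pairs
```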
“…Their system was demonstrated on several examples including a mouse controlled fish, whereby a mouse cursor was used to guide the path of the fish with different velocities. Similarly, Flagg et al [20] presented Human Video Textures, where, given a video of a martial artist performing various actions, they produce a photorealistic avatar which can be controlled, akin to a combat game character. Lee et al [21] used interactive controllers to animate an avatar from human motion captured data.…”
Section: Related Work (mentioning)
confidence: 99%