2020
DOI: 10.1145/3386569.3392427

Example-driven virtual cinematography by learning camera behaviors

Abstract: Designing a camera motion controller that has the capacity to move a virtual camera automatically in relation to the contents of a 3D animation, in a cinematographic and principled way, is a complex and challenging task. Many cinematographic rules exist, yet practice shows there are significant stylistic variations in how these can be applied. In this paper, we propose an example-driven camera controller which can extract camera behaviors from an example film clip and re-apply the extracted behaviors t…


Cited by 39 publications (16 citation statements)
References 22 publications
“…The movie data is extracted from the MovieNet dataset [19], which consists of 1,100 movies and 1,600,000 clips. We estimated the cinematic features from a subset of movie clips using the cinematic feature estimator [21] (a convolutional neural network which regresses the cinematic features from 2D skeleton motions). For the movie dataset, we first filter the sequences according to their number of characters and clip length.…”
Section: Dataset
confidence: 99%
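The filtering step described in the quoted passage can be sketched as follows. This is an illustrative assumption, not the citing paper's code: the `Clip` fields and the thresholds are hypothetical stand-ins for "number of characters and clip length".

```python
# Hypothetical sketch of filtering MovieNet-style clips by character
# count and clip length, as the quoted dataset section describes.
# Field names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Clip:
    movie_id: str
    num_characters: int   # characters detected in the clip
    num_frames: int       # clip length in frames

def filter_clips(clips, max_characters=2, min_frames=30, max_frames=600):
    """Keep clips that are neither too crowded, too short, nor too long."""
    return [
        c for c in clips
        if 1 <= c.num_characters <= max_characters
        and min_frames <= c.num_frames <= max_frames
    ]

clips = [
    Clip("tt0001", 1, 120),   # kept
    Clip("tt0002", 5, 200),   # dropped: too many characters
    Clip("tt0003", 1, 10),    # dropped: too short
]
kept = filter_clips(clips)
```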
“…Collision avoidance is always a critical issue when artists design animations, since the desired camera motion may unexpectedly hit other objects in the virtual scene. Compared with the example-driven solution [21], which has to redundantly try multiple reference clips to find one without collision, here the designer has the ability to avoid collision by easily inserting a keyframe or forcing the velocity toward a region where no obstacles block the camera's view. Frames with a red camera icon at the corner refer to keyframe constraints.…”
Section: Keyframe Editing and Trajectory Generation
confidence: 99%
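The collision test implied by the quoted passage can be sketched as a simple check of sampled camera positions against sphere obstacles. This is a minimal illustrative sketch, not the citing paper's method; the obstacle representation and sample format are assumptions.

```python
# Illustrative collision check: does a sampled camera trajectory enter
# any obstacle sphere? A keyframed detour should pass where a straight
# path fails. Obstacles are (cx, cy, cz, radius) tuples by assumption.
import math

def collides(trajectory, obstacles, clearance=0.0):
    """Return True if any camera sample lies inside an obstacle sphere."""
    for point in trajectory:
        for (cx, cy, cz, r) in obstacles:
            if math.dist(point, (cx, cy, cz)) < r + clearance:
                return True
    return False

obstacles = [(0.0, 0.0, 0.0, 1.0)]                            # unit sphere at origin
straight = [(-2.0 + 0.5 * i, 0.0, 0.0) for i in range(9)]     # passes through it
detour   = [(-2.0 + 0.5 * i, 2.0, 0.0) for i in range(9)]     # keyframed around it
```

Here `collides(straight, obstacles)` is true while `collides(detour, obstacles)` is false, mirroring how inserting a keyframe lets a designer steer the camera clear of an obstacle.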
“…Automated camera control in virtual environments: Automated camera placement (Lino & Christie, 2012) and motion planning (Li & Cheng, 2008; Yeh et al., 2011) have been studied extensively in virtual environments (Christie, Olivier, & Normand, 2008). A very recent work uses a deep learning approach to learn automated camera control from real film sequences (Jiang et al., 2020). These works share our goal of assisting users in the creation of camera motion and introduced the idea of defining viewing constraints in image space (cf. Drucker & Zeltzer, 1994; Gleicher & Witkin, 1992; Lino & Christie, 2015; Lino et al., 2011).…”
Section: Related Work
confidence: 99%
“…These works explore parameterizing desired shot qualities and controls [2][3][4][5], feasibility of dynamic shots [6], and assigning sequences of shots [7][8]. Other work has focused on algorithmic frameworks for helping directors achieve desirable aesthetic qualities of their shots [9][10].…”
Section: Introduction
confidence: 99%