2017
DOI: 10.1145/3072959.3073653

Computational video editing for dialogue-driven scenes

Abstract: We present a system for efficiently editing video of dialogue-driven scenes. The input to our system is a standard film script and multiple video takes, each capturing a different camera framing or performance of the complete scene. Our system then automatically selects the most appropriate clip from one of the input takes, for each line of dialogue, based on a user-specified set of film-editing idioms. Our system starts by segmenting the input script into lines of dialogue and then splitting each input take i…
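The per-line clip selection the abstract describes can be read as a shortest-path problem: each line of dialogue has candidate clips (one per take), each clip has a cost for how well it satisfies the chosen idioms, and transitions between clips carry their own cost. The sketch below illustrates that framing with dynamic programming; it is not the authors' implementation, and Clip, idiom_cost, and transition_cost are hypothetical placeholders for the paper's labeled features and user-specified idioms.

    from dataclasses import dataclass

    @dataclass
    class Clip:
        take: int          # which input take the clip comes from
        framing: str       # e.g. "wide", "medium", "close-up"

    def idiom_cost(clip: Clip, line_idx: int) -> float:
        # Hypothetical per-clip cost: how well this clip satisfies the
        # user-specified idioms for this line (e.g. "emphasize speaker").
        return 0.0

    def transition_cost(prev: Clip, cur: Clip) -> float:
        # Hypothetical pairwise cost: e.g. penalize cutting between
        # near-identical framings of the same take (a jump cut).
        return 1.0 if (prev.take == cur.take and prev.framing == cur.framing) else 0.0

    def select_clips(candidates: list[list[Clip]]) -> list[Clip]:
        """candidates[i] holds the clips time-aligned with dialogue line i."""
        n = len(candidates)
        # best[i][j] = minimum total cost of an edit ending in candidates[i][j]
        best = [[idiom_cost(c, 0) for c in candidates[0]]]
        back = [[-1] * len(candidates[0])]
        for i in range(1, n):
            row, ptr = [], []
            for c in candidates[i]:
                costs = [best[i - 1][k] + transition_cost(p, c)
                         for k, p in enumerate(candidates[i - 1])]
                k = min(range(len(costs)), key=costs.__getitem__)
                row.append(costs[k] + idiom_cost(c, i))
                ptr.append(k)
            best.append(row)
            back.append(ptr)
        # Trace back the lowest-cost sequence of clips, one per line.
        j = min(range(len(best[-1])), key=best[-1].__getitem__)
        path = [j]
        for i in range(n - 1, 0, -1):
            j = back[i][j]
            path.append(j)
        path.reverse()
        return [candidates[i][j] for i, j in enumerate(path)]

With real cost functions derived from the labeled takes, select_clips returns one clip per line of dialogue minimizing idiom plus transition costs, which is one natural way to realize the selection step the abstract outlines.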

Cited by 100 publications (68 citation statements) · References 25 publications
“…For example, in Drucker and Zeltzer (), the researchers use an A* planner to move a virtual camera in precomputed indoor simulation scenarios to avoid collisions with obstacles in 2D. More recently, we find works such as Leake, Davis, Truong, and Agrawala () that post-process videos of a scene taken from different angles by automatically labeling features of different views. The approach uses high-level user-specified rules which exploit the labels to automatically select the optimal sequence of viewpoints for the final movie.…”
Section: Related Work
confidence: 99%
“…For example, in [18], the researchers use an A* planner to move a virtual camera in pre-computed indoor simulation scenarios to avoid collisions with obstacles in 2D. More recently, we find works such as [23] that post-process videos of a scene taken from different angles by automatically labeling features of different views. The approach uses high-level user-specified rules which exploit the previously labeled features to automatically select the optimal sequence of viewpoints for the final movie.…”
Section: A. Arts and Computer Graphics
confidence: 99%
“…Leake et al. [LDTA17] propose a computational video editing approach for dialogue‐driven scenes which utilizes the script and multiple video recordings of the scene to select the optimal recording that best satisfies user preferences (such as emphasizing a particular character or intensifying emotional dialogues). A more general effort was made by Galvane et al.…”
Section: Related Work
confidence: 99%
“…While human attention is influenced by bottom-up cues, it is also impacted by top-down cues relating to scene semantics such as faces, spoken dialogue, scene actions and emotions which are integral to the storyline [SSSM14, GSY*15]. Leake et al. [LDTA17] propose a computational video editing approach for dialogue-driven scenes which utilizes the script and multiple video recordings of the scene to select the optimal recording that best satisfies user preferences (such as emphasizing a particular character or intensifying emotional dialogues). A more general effort was made by Galvane et al. [GRLC15] for continuity editing in 3D animated sequences.…”
Section: Related Work
confidence: 99%