2019
DOI: 10.1145/3306346.3323006
Stylizing video by example

Abstract: We introduce a new example-based approach to video stylization, with a focus on preserving the visual quality of the style, user controllability, and applicability to arbitrary video. Our method takes as input one or more keyframes that the artist chooses to stylize with standard painting tools. It then automatically propagates the stylization to the rest of the sequence. To facilitate this while preserving visual quality, we developed a new type of guidance for state-of-the-art patch-based synthesis, that can be ap…
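The core idea, propagating an artist-stylized keyframe forward through the sequence, can be illustrated with a deliberately naive baseline: warp the previous stylized frame along dense optical flow. The sketch below is only a hedged illustration of that propagation concept, not the paper's guided patch-based synthesis; the function name, frame lists, and OpenCV flow parameters are assumptions made for the example.

```python
# Naive keyframe propagation by optical-flow warping: a minimal sketch of
# "propagate the stylization to the rest of the sequence", NOT the paper's
# guided patch-based synthesis. Frame lists and parameters are hypothetical.
import cv2
import numpy as np

def propagate_stylization(frames, stylized_key, key_index=0):
    """frames: list of BGR uint8 frames; stylized_key: artist-stylized frames[key_index]."""
    stylized = {key_index: stylized_key.astype(np.float32)}
    prev_gray = cv2.cvtColor(frames[key_index], cv2.COLOR_BGR2GRAY)
    for i in range(key_index + 1, len(frames)):
        gray = cv2.cvtColor(frames[i], cv2.COLOR_BGR2GRAY)
        # Backward flow: for each pixel of frame i, where it came from in frame i-1.
        flow = cv2.calcOpticalFlowFarneback(gray, prev_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = gray.shape
        grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
        map_x = (grid_x + flow[..., 0]).astype(np.float32)
        map_y = (grid_y + flow[..., 1]).astype(np.float32)
        # Warp the previously stylized frame into the current frame's coordinates.
        stylized[i] = cv2.remap(stylized[i - 1], map_x, map_y, cv2.INTER_LINEAR)
        prev_gray = gray
    return {k: np.clip(v, 0, 255).astype(np.uint8) for k, v in stylized.items()}
```

Pure warping like this drifts and smears after a few frames; per the abstract, the paper's guidance instead lets a full patch-based synthesizer re-synthesize every frame from the keyframe style rather than merely advecting it.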

Cited by 59 publications (47 citation statements)
References 54 publications
“…See Figures 9 and 10 and our supplementary video for results in this scenario. As in the video stylization case, when compared to other techniques [GEB16, KSS19, JST*19, TFK*20], our approach better preserves the style exemplar (cf. Fig.…”
Section: Results
confidence: 92%
“…This problem can be formulated within the context of Image Analogies [3], in which the goal is to stylize a target unstylized image B, given a pair of images A (unstylized) and A' (stylized). The most common approach to tackle this problem has been via patch-based texture synthesis [7], [8], [9]. Nevertheless, recent approaches have leveraged the capabilities of deep latent spaces within convolutional neural networks to disentangle style from content [10].…”
Section: Related Work
confidence: 99%
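To make the Image Analogies formulation quoted above concrete, the brute-force sketch below copies, for every patch of the unstylized target B, the stylized pixel whose surrounding patch in the unstylized exemplar A matches it best. It is a hedged illustration of the A : A' :: B : B' analogy, not any cited system; the array names and patch size are assumptions, and real patch-based synthesizers add coherence terms and fast approximate nearest-neighbour search (e.g. PatchMatch).

```python
# Brute-force Image Analogies sketch: stylize B given the exemplar pair (A, A_prime).
# Illustrative only; quadratic in image size and without coherence terms.
import numpy as np

def analogy_stylize(A, A_prime, B, patch=7):
    """A, A_prime, B: float arrays of shape (H, W, C); returns a stylized version of B."""
    half = patch // 2
    Hc, Wc = A.shape[0] - patch + 1, A.shape[1] - patch + 1
    # All unstylized exemplar patches, flattened, plus the stylized color at each patch center.
    src_patches = np.stack([
        A[y:y + patch, x:x + patch].ravel()
        for y in range(Hc) for x in range(Wc)
    ])
    src_colors = A_prime[half:half + Hc, half:half + Wc].reshape(-1, A_prime.shape[2])
    out = np.zeros_like(B)  # borders are left unstylized in this sketch
    for y in range(half, B.shape[0] - half):
        for x in range(half, B.shape[1] - half):
            q = B[y - half:y + half + 1, x - half:x + half + 1].ravel()
            # Nearest neighbour in exemplar patch space (L2 distance).
            idx = np.argmin(((src_patches - q) ** 2).sum(axis=1))
            out[y, x] = src_colors[idx]
    return out
```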
“…The depth loss is computed as the per-pixel loss between the depth estimated from the input content image and the stylized image, using a pre-trained single-image depth perception network [10]. Following the success of style transfer for 2D images, applications to other imaging modalities have emerged, such as video [11,12], omnidirectional imaging [13], or stereo imaging [14,15]. The video style transfer approach of Ruder et al. [11] builds on the optimization approach of Gatys et al., enforcing temporal consistency by initialising a stylized frame from a warped stylized previous frame and introducing a temporal consistency loss penalizing deviations between consecutive stylized frames.…”
Section: Introduction
confidence: 99%
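The temporal consistency term mentioned above can be written compactly: penalize per-pixel deviations between the current stylized frame and the previous stylized frame warped into it by optical flow, counting only pixels where the warp is reliable. The PyTorch sketch below is a hedged reconstruction of that idea under assumed tensor shapes; it mirrors the loss Ruder et al. describe in spirit, not their exact implementation.

```python
# Temporal consistency loss sketch: compare the current stylized frame against
# the flow-warped previous stylized frame, masked at disocclusions.
import torch
import torch.nn.functional as F

def temporal_consistency_loss(stylized_t, stylized_prev, flow, occlusion_mask):
    """
    stylized_t, stylized_prev: (N, C, H, W) stylized frames t and t-1.
    flow: (N, 2, H, W) backward flow mapping frame-t pixels to frame t-1 (assumed given).
    occlusion_mask: (N, 1, H, W), 1 where the warp is valid, 0 at disocclusions.
    """
    n, _, h, w = flow.shape
    # Build a sampling grid in normalized [-1, 1] coordinates for grid_sample.
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(flow.device)   # (2, H, W), (x, y) order
    coords = grid.unsqueeze(0) + flow                              # (N, 2, H, W)
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    sample_grid = torch.stack((coords_x, coords_y), dim=-1)        # (N, H, W, 2)
    warped_prev = F.grid_sample(stylized_prev, sample_grid, align_corners=True)
    # Per-pixel squared error, counted only where the warp is trustworthy.
    diff = (stylized_t - warped_prev) ** 2 * occlusion_mask
    return diff.sum() / occlusion_mask.sum().clamp(min=1.0)
```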