2020
DOI: 10.1145/3386569.3392453
Interactive video stylization using few-shot patch-based training

Abstract: In this paper, we present a learning-based method for keyframe-based video stylization that allows an artist to propagate the style from a few selected keyframes to the rest of the sequence. Its key advantage is that the resulting stylization is semantically meaningful, i.e., specific parts of moving objects are stylized according to the artist's intention. In contrast to previous style transfer techniques, our approach does not require any lengthy pre-training process nor a large training dataset. We demon…
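The core idea the abstract describes can be sketched as follows: aligned random patches are drawn from a keyframe and its artist-stylized counterpart, and these pairs form the small training set for an image-to-image translation network. This is a minimal illustrative sketch, not the authors' implementation; the function name and parameters are assumptions.

```python
import numpy as np

def sample_patch_pairs(keyframe, stylized, patch_size=32, n_patches=64, rng=None):
    """Sample aligned (input, target) patch pairs from one keyframe and its
    artist-stylized counterpart, forming a few-shot training set."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = keyframe.shape[:2]
    # Draw top-left corners so every patch fits inside the frame.
    ys = rng.integers(0, h - patch_size + 1, size=n_patches)
    xs = rng.integers(0, w - patch_size + 1, size=n_patches)
    # The same coordinates index both images, so each pair stays aligned.
    inputs = np.stack([keyframe[y:y + patch_size, x:x + patch_size]
                       for y, x in zip(ys, xs)])
    targets = np.stack([stylized[y:y + patch_size, x:x + patch_size]
                        for y, x in zip(ys, xs)])
    return inputs, targets

# Demo with a synthetic 128x128 RGB keyframe pair; a stand-in inversion
# plays the role of the artist's stylized keyframe.
key = np.random.default_rng(1).random((128, 128, 3)).astype(np.float32)
sty = 1.0 - key
inp, tgt = sample_patch_pairs(key, sty)
print(inp.shape, tgt.shape)  # (64, 32, 32, 3) (64, 32, 32, 3)
```

At inference time, a network trained on such pairs is applied to every frame of the video independently, which is what makes the method usable without a large dataset or long pre-training.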

Cited by 48 publications (46 citation statements) | References 23 publications
“…The final image translation model can be used for real‐time stylization of a new video conference call that contains the same person and has similar lighting conditions (target frames). Note that in contrast to the method of Texler et al [TFK*20] our approach better preserves style details and keeps the stylization more consistent in time (see also our supplementary video). Video frames and source style © Zuzana Studená, used with permission.…”
Section: Results
confidence: 82%
“…As visible from the results and comparisons, our approach can better preserve style details during a longer time frame even if the scene structure changes considerably with respect to X. Also, note how the resulting stylized sequence has better temporal stability implicitly without performing any additional treatment, which contrasts with previous techniques [JST*19, TFK*20] that need to handle temporal consistency explicitly.…”
Section: Results
confidence: 89%
“…The corresponding calling interface is designed for the classification system, which ensures the usability of the classification module. At the same time, a Classified Thesaurus is designed to preserve the unique feature words of each category and to prioritize the categories of documents to be classified [10].…”
Section: An Overview Of Text Classification
confidence: 99%
“…This has found applications in colour adaptation [33] and in transferring the overall style of portrait images [36]. Convolutional neural networks (CNNs) were presented to transfer a painter's artistic style to natural images [11], which has inspired further research in neural style transfer to improve quality and expand the scope of applications [22,23,25,38]. By adding a structural component to address UI details and usability, CNNs have been recently used to restyle GUIs with artistic images [8].…”
Section: Style Transfer and Layout Retargeting
confidence: 99%