Proceedings of the 19th ACM International Conference on Multimedia 2011
DOI: 10.1145/2072298.2071933
Automatic motion-guided video stylization and personalization

Cited by 9 publications (5 citation statements) | References 12 publications
“…Cao et al [4] adapted the idea of image analogies [8] to achieve the multi-style video stylization system. This method divides the first stylized video frame into many blocks and then propagates these patches according to the optical flow field, which is optimized by occlusion detection and bilateral diffusion.…”
Section: Related Work
confidence: 99%
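For illustration only, a minimal Python/OpenCV sketch of the idea summarized in this statement: warp the stylized first frame to the next frame along dense optical flow. The cited method operates on blocks and additionally refines the flow with occlusion detection and bilateral diffusion, which this sketch omits; the function name and the inputs frame0_stylized, frame0_gray and frame1_gray are hypothetical.

import cv2
import numpy as np

def propagate_stylization(frame0_stylized, frame0_gray, frame1_gray):
    # Flow from frame 1 back to frame 0: frame1(p) ~ frame0(p + flow(p)).
    flow = cv2.calcOpticalFlowFarneback(frame1_gray, frame0_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = frame1_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    # Sample the stylized first frame at the flowed positions (backward warping).
    return cv2.remap(frame0_stylized, map_x, map_y, cv2.INTER_LINEAR)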
“…Furthermore, this method needs to calculate the distance between the overlap regions of the adjacent blocks to determine whether re-rendering these regions is necessary. Unlike the algorithm in [4], we neither need to optimize the optical flow field nor calculate the correlation between blocks to ensure temporal coherence. We only need to detect the distorted texture layer area during texture advection and perform limited texture in-painting.…”
Section: Related Work
confidence: 99%
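Again purely as an illustration of the idea described in this statement, a hedged Python/OpenCV sketch that repairs an already-advected texture layer: it flags heavily stretched regions using the flow's spatial gradients and re-fills only those regions with in-painting. The flow-gradient threshold is a placeholder heuristic rather than the detection criterion of the citing paper, and repair_distorted_texture, advected_texture and flow are hypothetical names/inputs (advected_texture is assumed to be an 8-bit image).

import cv2
import numpy as np

def repair_distorted_texture(advected_texture, flow, stretch_threshold=0.5):
    # Approximate local stretching of the advected texture from the flow field:
    # large |du/dx| or |dv/dy| indicates the texture layer was distorted there.
    du_dy, du_dx = np.gradient(flow[..., 0])
    dv_dy, dv_dx = np.gradient(flow[..., 1])
    stretch = np.abs(du_dx) + np.abs(dv_dy)
    mask = (stretch > stretch_threshold).astype(np.uint8) * 255
    # Limited in-painting, restricted to the distorted mask.
    return cv2.inpaint(advected_texture, mask, 3, cv2.INPAINT_TELEA)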
“…However, none of these is fully satisfactory, causing either blending, sliding or 3D distortion artifacts. Cao et al [2011] use a comparable approach to stylize videos by relying on optical flow. Hashimoto et al [2003] and Haro [2003] applied the Image Analogies method to low-resolution video sequences, using motion estimation to increase temporal coherence.…”
Section: Synthesis For Animation
confidence: 99%
“…Traditional researches in the NPR area are usually validated by subjective/visual experiments, as in the research of Kang et al (2015), Wenhua et al (2015), Zhang et al (2013), Baugh and Kokaram (2010), Benard et al (2012), Borawski (2014), Cao et al (2011), Chen et al (2012), Gangopadhyay et al (2016) and Wang et al (2017). However, purely subjective/visual evaluation, without the use of external objective criteria and the lack of a representative sample of participants, weakens the statistical validity of the analyses performed.…”
Section: Introduction
confidence: 99%