2021 IEEE Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv48630.2021.00113

Real-time Localized Photorealistic Video Style Transfer

Cited by 24 publications (8 citation statements)
References 29 publications
“…Other works adopt particular style transfer operators [17,19,28,29,37,59] on top of an encoder-decoder pre-trained on a natural image dataset, e.g., COCO [33]. More recent advances [60,61] learn an edge-preserving local affine grid [12] in bilateral space [5] to transfer color locally. Because these methods are heavily influenced by the input size and the semantic mask, directly applying them to a 3D scene produces distortion and inconsistency.…”
Section: Style Transfer Methods
confidence: 99%
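The bilateral-grid idea mentioned above stores a coarse grid of per-cell affine color transforms and "slices" it per pixel using spatial position and a luminance guide. A minimal sketch of the slicing-and-apply step is below; the function name, grid layout, and nearest-neighbour lookup are illustrative assumptions (the cited methods learn the grid with a network and slice with trilinear interpolation):

```python
import numpy as np

def slice_and_apply(grid, image):
    """Apply per-pixel affine color transforms stored in a bilateral grid.

    grid:  (GH, GW, GL, 3, 4) array of affine matrices, indexed by a
           coarse spatial cell (GH, GW) and a luminance bin (GL).
    image: (H, W, 3) float RGB image with values in [0, 1].
    Returns the locally color-transformed (H, W, 3) image.
    """
    H, W, _ = image.shape
    GH, GW, GL = grid.shape[:3]
    # Luminance guide picks the third grid coordinate per pixel.
    lum = image @ np.array([0.299, 0.587, 0.114])
    ys = np.clip(np.arange(H) * GH // H, 0, GH - 1)
    xs = np.clip(np.arange(W) * GW // W, 0, GW - 1)
    ls = np.clip((lum * GL).astype(int), 0, GL - 1)
    # Nearest-neighbour slicing (real methods interpolate trilinearly,
    # which is what preserves edges across cell boundaries).
    A = grid[ys[:, None], xs[None, :], ls]          # (H, W, 3, 4)
    # Append a homogeneous 1 so each 3x4 matrix acts as an affine map.
    homo = np.concatenate([image, np.ones((H, W, 1))], axis=-1)
    return np.einsum('hwij,hwj->hwi', A, homo)
```

Because each pixel gets its own affine transform selected by both position and intensity, edges in the guide image become edges in the transform field, which is the "edge-preserving" property the quote refers to.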
“…For blind approaches without a dedicated guiding style reference, this can be done by using optical flow to compute temporal losses [Chen et al. 2020] or to align intermediate feature representations [Gao et al. 2018], stabilizing the model's predictions across nearby video frames. Recently, there have been efforts to improve consistency and speed for arbitrary-style video style transfer through temporal regularization [Wang et al. 2020a], multi-channel correlation, and bilateral learning [Xia et al. 2021]. Similarly, style transfer for stereo images [Chen et al. 2018; Gong et al. 2018] aims to achieve cross-view consistency by using dense pixel correspondences (obtained via stereo matching) as constraints.…”
Section: Related Work 2.1 Image and Video Style Transfer
confidence: 99%
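The optical-flow temporal loss described above warps the previous stylized frame into the current one and penalizes differences in non-occluded regions. A minimal NumPy sketch follows; the function name, the nearest-neighbour warp, and the mask convention are illustrative assumptions (published methods typically use bilinear warping and a forward-backward flow check to build the mask):

```python
import numpy as np

def temporal_loss(stylized_t, stylized_prev, flow, occlusion_mask):
    """Temporal consistency loss between consecutive stylized frames.

    stylized_t, stylized_prev: (H, W, 3) stylized frames at t and t-1.
    flow:           (H, W, 2) flow mapping frame-t pixels back to t-1.
    occlusion_mask: (H, W), 1 where the flow is valid, 0 where occluded.
    Returns the mean squared error over valid pixels.
    """
    H, W = flow.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    # Look up each frame-t pixel's source location in frame t-1
    # (nearest-neighbour warp; real implementations sample bilinearly).
    src_x = np.clip(np.round(xs + flow[..., 0]), 0, W - 1).astype(int)
    src_y = np.clip(np.round(ys + flow[..., 1]), 0, H - 1).astype(int)
    warped = stylized_prev[src_y, src_x]
    diff = (stylized_t - warped) ** 2
    valid = occlusion_mask[..., None]
    return float((valid * diff).sum() / (valid.sum() * 3 + 1e-8))
```

During training this scalar is added to the usual content and style losses, so the network is pushed to keep stylization stable wherever the scene content merely moves between frames.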
“…The latest neural style transfer techniques can process video in real time. Videos of virtual or physical worlds can be re-rendered in a multitude of user-defined styles [96] (Figure 7a), and can be processed at fine granularity by applying different styles to different object classes within a scene [214] (Figure 7b). This implies that metaverse creators can choose their preferred styles and offer highly personalised yet artistic experiences to other metaverse users, given the cinema-like nature of the metaverse.…”
Section: Virtual Photography / Cinematic Simulation
confidence: 99%