2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00605
A Closed-Form Solution to Universal Style Transfer

Abstract: Universal style transfer explicitly minimizes losses in feature space, so it does not require training on any pre-defined styles. It typically uses different layers of a VGG network as encoders and trains several decoders to invert the features back into images; the effect of style transfer is thus achieved by a feature transform. Although plenty of methods have been proposed, a theoretical analysis of the feature transform is still missing. In this paper, we first propose a novel interpretation by tr…
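The feature transforms the abstract refers to are closed-form operations on encoder activations. A representative example is the whitening-and-coloring transform (WCT) used in universal style transfer: whiten the content features to remove their covariance structure, then color them with the style features' covariance. The sketch below is a minimal NumPy illustration under assumed (C, H*W) flattened feature shapes; the function name and `eps` regularizer are illustrative, not the paper's exact formulation.

```python
import numpy as np

def wct(content_feat, style_feat, eps=1e-5):
    """Whitening-and-coloring transform (WCT), a closed-form feature
    transform for universal style transfer. Inputs are (C, N) matrices of
    flattened encoder activations; eps regularizes the covariances."""
    fc = content_feat - content_feat.mean(axis=1, keepdims=True)
    fs = style_feat - style_feat.mean(axis=1, keepdims=True)
    c = fc.shape[0]
    # Whiten: map content features to (approximately) identity covariance.
    Ec, wc, _ = np.linalg.svd(fc @ fc.T / fc.shape[1] + eps * np.eye(c))
    whitened = Ec @ np.diag(wc ** -0.5) @ Ec.T @ fc
    # Color: impose the style features' covariance, then the style mean.
    Es, ws, _ = np.linalg.svd(fs @ fs.T / fs.shape[1] + eps * np.eye(c))
    colored = Es @ np.diag(ws ** 0.5) @ Es.T @ whitened
    return colored + style_feat.mean(axis=1, keepdims=True)
```

Feeding the transformed features through a trained decoder yields the stylized image; the transform itself involves no learned parameters, which is what makes the approach "universal".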

Cited by 61 publications (41 citation statements)
References 32 publications (70 reference statements)
“…Furthermore, the texture in our stylized frames is more authentic, which validates that our method performs better than ASTV at maintaining the detailed content of the source video. Figure 4 shows two examples of two consecutive frames from the CutBunny video, stylized with a Mondrian style image by the image style transfer methods (i.e., AdaIN [9], OST [13], and the base model [12]) and by our method. The first two columns demonstrate the object motion and the last two columns show the static scene.…”
Section: Results
confidence: 99%
“…The MPI Sintel dataset, which contains 35 videos, provides multiple real-world scenarios. The WikiArt dataset [15], consisting of 11,025 images, is used as the style image dataset, and all the test style images are taken from published implementations [13].…”
Section: Experiments, 4.1 Datasets
confidence: 99%
“…The transformed features are then fed forward into decoder layers to obtain the stylized image. To improve on [12], where only the style loss is considered, Lu et al. [13] seek optimal style transfer that preserves image structures by also considering the content loss. Li et al. [14] propose a learnable linear transformation matrix, computed from arbitrary pairs of content and style images by two lightweight CNNs.…”
Section: A Model Optimization Based Methods
confidence: 99%
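The AdaIN transform [9] cited in the excerpts above is the simplest of these feature transforms: it matches only per-channel first- and second-order statistics, rescaling each channel of the content features to the style features' mean and standard deviation. The following is a minimal NumPy sketch under assumed (C, H, W) activation shapes; names and the `eps` stabilizer are illustrative.

```python
import numpy as np

def adain(content_feat, style_feat, eps=1e-5):
    """Adaptive Instance Normalization (AdaIN): normalize each channel of
    the content feature map, then rescale it to the style feature map's
    channel-wise mean and standard deviation. Inputs are (C, H, W) arrays,
    e.g. VGG encoder activations."""
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True) + eps
    return s_std * (content_feat - c_mean) / c_std + s_mean
```

Unlike WCT, AdaIN ignores cross-channel correlations, which is why later work such as the learnable linear transforms of Li et al. [14] replaces the per-channel scaling with a full matrix transform.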