2017 IEEE International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2017.270
Decoder Network over Lightweight Reconstructed Feature for Fast Semantic Style Transfer

Cited by 50 publications (38 citation statements)
References 9 publications
“…[Figure 2: A taxonomy of NST techniques. Our proposed NST taxonomy extends the IB-AR taxonomy proposed by Kyprianidis et al. [14].]…”
Section: Per-Style-Per-Model Neural Methods
confidence: 99%
“…Both [65] and [68] are based on IOB-NST algorithms and therefore leave much room for improvement. Lu et al. [70] speed up the process by optimising the objective function in feature space instead of in pixel space. More specifically, they propose to perform feature reconstruction, rather than image reconstruction as previous algorithms do.…”
Section: Improvements and Extensions
confidence: 99%
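The feature-space optimisation that the statement above attributes to Lu et al. can be illustrated with a small NumPy sketch. This is my own toy version under simplifying assumptions (a single flattened feature map, plain gradient descent, a Gram matrix as the style statistic), not the authors' implementation: the optimisation variable is the feature map itself rather than the pixels, which is what makes the method fast.

```python
import numpy as np

def gram(F):
    # F: (C, HW) flattened feature map; the Gram matrix
    # captures channel correlations, a common style statistic.
    return F @ F.T

def feature_space_transfer(F_c, F_s, lam=1e-3, lr=1e-3, steps=200):
    """Optimise a feature map directly (not an image) to stay close to
    the content features F_c while pulling its Gram matrix toward that
    of the style features F_s. Gradient descent on features only."""
    F = F_c.copy()
    G_s = gram(F_s)
    for _ in range(steps):
        grad_content = 2.0 * (F - F_c)            # d/dF ||F - F_c||^2
        grad_style = 4.0 * (gram(F) - G_s) @ F    # d/dF ||G(F) - G_s||^2
        F -= lr * (grad_content + lam * grad_style)
    return F
```

The optimised feature map would then be decoded to an image by a separate decoder network; that step is omitted here.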
“…Another type of control is spatial, allowing users to ensure that certain regions of the output should be stylized using only features from a manually selected region of the style image (or that different regions of the output image should be stylized based on different style images). In [5,18] the authors propose forms of spatial control based on the user defining matched regions of the image by creating a dense mask for both the style and content image. We demonstrate that it is straightforward to incorporate this type of user-control into our formulation of style transfer.…”
Section: Style
confidence: 99%
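The spatial control described above can be sketched in its simplest form: once two stylised renderings of the same content image exist, a dense user-drawn mask selects which style governs each region. This is an illustrative composite step only, not the masked-feature formulation of the cited works, and all names here are my own.

```python
import numpy as np

def masked_style_blend(stylized_a, stylized_b, mask):
    """Compose an output from two stylised renderings of the same
    content using a dense per-pixel mask.

    stylized_a, stylized_b: (H, W, C) images.
    mask: (H, W) array in [0, 1]; 1 selects style A, 0 selects style B.
    """
    m = mask[..., None]  # broadcast the mask across colour channels
    return m * stylized_a + (1.0 - m) * stylized_b
```

In the cited methods the mask is applied to feature maps inside the network rather than to finished images, which avoids seams at region boundaries; the pixel-space blend above is only the intuition.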
“…Once F^l_o is obtained, we get the output image I_o by decoding F^l_o back to the image domain. The decoder can be pretrained for efficiency [30,20,33]. By contrast, we adopt a decoder learning scheme different from theirs.…”
Section: Single Layer Optimization
confidence: 99%
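The pretrained-decoder idea in the statement above — learn once how to map features back to images, then reuse that mapping at inference time — can be shown with a linear toy model. This is a hedged sketch with invented names (`fit_linear_decoder`, `W_enc`), not the convolutional decoder of the paper: for a fixed linear "encoder", the reconstructing decoder has a closed-form least-squares solution.

```python
import numpy as np

def fit_linear_decoder(W_enc, X):
    """Pretrain a linear decoder D so that D @ (W_enc @ X) ~ X,
    i.e. learn to invert features back to the image domain.

    W_enc: (C, P) fixed encoder matrix.
    X:     (P, N) training images, one flattened image per column.
    """
    F = W_enc @ X                                  # encode training set
    # Least squares: solve F.T @ D.T = X.T, so that D @ F ~ X.
    D_T, *_ = np.linalg.lstsq(F.T, X.T, rcond=None)
    return D_T.T

def decode(D, F):
    """Map a (possibly stylised) feature map back to image space."""
    return D @ F
```

Once fitted, `decode` is a single matrix product per image, which mirrors why a pretrained decoder makes stylisation fast: no per-image optimisation in pixel space is needed.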