2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2017.296
Abstract: We propose StyleBank, which is composed of multiple convolution filter banks and each filter bank explicitly represents one style, for neural image style transfer. To transfer an image to a specific style, the corresponding filter bank is operated on top of the intermediate feature embedding produced by a single auto-encoder. The StyleBank and the auto-encoder are jointly learnt, where the learning is conducted in such a way that the auto-encoder does not encode any style information thanks to the flexibility …
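The architecture described in the abstract can be sketched in a few lines: a shared feature embedding on which only the chosen style's filter bank is applied. This is an illustrative NumPy sketch, not the authors' implementation; the names `StyleBank`, `conv2d`, and `transfer`, the channel counts, and the random filters are all assumptions made for the example.

```python
import numpy as np

def conv2d(feat, kernels):
    """Valid 2-D convolution of a (C_in, H, W) feature map with
    (C_out, C_in, k, k) kernels; returns (C_out, H-k+1, W-k+1)."""
    c_out, c_in, k, _ = kernels.shape
    _, h, w = feat.shape
    out = np.zeros((c_out, h - k + 1, w - k + 1))
    for o in range(c_out):
        for i in range(h - k + 1):
            for j in range(w - k + 1):
                out[o, i, j] = np.sum(kernels[o] * feat[:, i:i + k, j:j + k])
    return out

class StyleBank:
    """Toy stand-in for the paper's idea: style-specific parameters live
    only in per-style convolution filter banks, applied to a style-free
    feature embedding produced by a shared auto-encoder (not modelled here)."""
    def __init__(self, n_styles, channels=4, k=3, seed=0):
        rng = np.random.default_rng(seed)
        # one filter bank per style, all operating on the same feature space
        self.banks = [rng.standard_normal((channels, channels, k, k)) * 0.1
                      for _ in range(n_styles)]

    def transfer(self, features, style_id):
        # select and apply only the requested style's filter bank
        return conv2d(features, self.banks[style_id])

rng = np.random.default_rng(1)
features = rng.standard_normal((4, 8, 8))   # stand-in for encoder output
bank = StyleBank(n_styles=3)
stylized = bank.transfer(features, style_id=1)
print(stylized.shape)  # (4, 6, 6)
```

In the actual model the encoder, decoder, and all filter banks are trained jointly, so the embedding itself stays style-agnostic; swapping the bank is all that is needed to switch styles.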

Cited by 399 publications (271 citation statements)
References 49 publications
“…The advantage of an explicitly defined style representation is that any input style can be modelled and transferred without requiring a large paired dataset, making it suitable for unsupervised one-shot transfer. [25], [26], [27], [28] have been proposed to speed up NST by training a feed-forward network to minimize the NST loss. However, the efficiency is achieved at the expense of flexibility and quality.…”
Section: Benchmarking Existing Style Transfer Models
confidence: 99%
“…Sped-up versions [50], [63] have managed to learn a network that transfers any content image to a single fixed style. More recently, works such as [64], [40] attempt to capture multiple styles in a single network and can transfer any content image to multiple styles. These style transfer approaches mainly employ perceptual losses for style and content to train the model directly, without any adversarial process.…”
Section: Model Sensitivity
confidence: 99%
“…To achieve better quality for stylized images at high resolution, [34] propose a multimodal convolutional network, which performs hierarchical stylization by utilizing multiple losses at increasing scales. Stylizing videos. While these works have approached the task of style transfer for input photographs, others concentrated on transferring artistic style to videos [27,16,30,28], using feed-forward style transfer networks [4] or networks that do not rely on optical flow at test time [16] to improve the consistency of stylization.…”
Section: Related Work
confidence: 99%