2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.01027
A Content Transformation Block for Image Style Transfer

Abstract: Style transfer has recently received a lot of attention, since it makes it possible to study fundamental challenges in image understanding and synthesis. Recent work has significantly improved the representation of color and texture, as well as computational speed and image resolution. The explicit transformation of image content has, however, been mostly neglected: while artistic style affects formal characteristics of an image, such as color, shape, or texture, it also deforms, adds, or removes content details. This paper explic…

Cited by 66 publications (30 citation statements); references 30 publications (70 reference statements).
“…Feature Disentanglement. In recent years, studies [5,7,11,13,14,19,29,31,32] have used generative adversarial networks (GANs), which can in some cases be applied to the style transfer task, to achieve image-to-image translation. A key idea suited to the style transfer task is that the style and content features should be disentangled because of the domain deviation.…”
Section: Related Work
confidence: 99%
“…In recent years, some researchers [5,7,11,13,14,19,26,29,31,32] have used Generative Adversarial Networks (GANs) for high-quality image-to-image translation. GAN-based methods can generate high-quality artistic works that can pass as real.…”
Section: Introduction
confidence: 99%
“…Arbitrary style transfer methods can be classified as either non-parametric [10,11,15,26,31,50,51,54] or parametric [14,19,20,25,29,37,43,44,46,49]. Non-parametric methods find similar patches between content and style images, and transfer style based on matched patches.…”
Section: Related Work
confidence: 99%
“…Neural style transfer (NST) refers to the generation of a pastiche image P from two images C and S via a neural network, where P shares the content with C but is in the style of S. While the original NST approach of Gatys [13] optimizes the transfer model for each pair of C and S, the field has rapidly evolved in recent years to develop models that support arbitrary styles out-of-the-box. NST models can, hence, be classified based on their stylization capacity into models trained for (1) a single combination of C and S [13,23,28,32,39], (2) one S [21,27,47,48], (3) multiple fixed S [2,9,24,30,42,55], and (4) infinite (arbitrary) S [4,14,15,17,19,20,25,29,31,37,43,44]. Intuitively, the category (4) of arbitrary style transfer (AST) is the most advantageous as it is agnostic to S, allowing trained models to be adopted for diverse novel styles without re-training.…”
Section: Introduction
confidence: 99%
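The statement above defines NST as producing a pastiche P that shares content with C but carries the style of S, with the original Gatys approach [13] optimizing per image pair. A minimal NumPy sketch of the underlying objective (the Gram-matrix style loss plus a feature-matching content loss); the weights `alpha` and `beta` and the single-layer feature shapes are illustrative assumptions, not values from the paper:

```python
import numpy as np

def gram_matrix(features):
    """Feature correlations for one layer.

    features: array of shape (channels, height * width), i.e. a CNN
    feature map flattened over its spatial dimensions.
    """
    c, n = features.shape
    # Normalize by the number of spatial positions so the statistic
    # is independent of image resolution.
    return features @ features.T / n

def nst_loss(content_feat, style_feat, pastiche_feat, alpha=1.0, beta=1e3):
    """Gatys-style objective for a single layer.

    Content loss: squared distance between pastiche and content features.
    Style loss: squared distance between the Gram matrices (texture
    statistics) of the pastiche and style features.
    """
    content_loss = np.mean((pastiche_feat - content_feat) ** 2)
    style_loss = np.mean(
        (gram_matrix(pastiche_feat) - gram_matrix(style_feat)) ** 2
    )
    return alpha * content_loss + beta * style_loss
```

In the per-pair setting (category 1 in the taxonomy above), this loss is minimized by gradient descent on the pastiche pixels; the later model categories (2)-(4) instead train feed-forward networks so that a single pass produces the stylized image.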
“…abstract concept, it is difficult to use quantitative metrics for comprehensive measurement. Based on this fact, we introduced two types of user studies, the Style Deception Score and the Semantic Retention Score, with reference to [20,21,32], to perceptually evaluate the effectiveness of our algorithm. Semantic Retention Ratio.…”
Section: Ours with ASM
confidence: 99%