2016 · Preprint
DOI: 10.48550/arxiv.1602.07188

Exploring the Neural Algorithm of Artistic Style

Abstract: In this work we explore the method of style transfer presented in [1]. We first demonstrate the power of the suggested style space on a few examples. We then vary different hyper-parameters and program properties that were not discussed in [1], among which are the recognition network used, the starting point of the gradient descent, and different ways to partition style and content layers. We also give a brief comparison of some of the existing algorithm implementations and deep learning frameworks used. To study the…
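The approach being varied here follows the Gatys et al. formulation: an image is optimized by gradient descent so that its deep features match the content image on some layers and its feature correlations match the style image on others. Below is a minimal, hedged sketch of that loop in PyTorch; `extract_features`, the layer choices, the loss weights, and the optimizer are illustrative placeholders, not the authors' settings.

```python
# Minimal sketch of the style-transfer optimization described above (not the
# authors' code). `extract_features` stands in for a pretrained recognition
# network returning a dict {layer_id: (channels, positions) feature map}.
import torch

def gram_matrix(feat):
    # Correlations between channels of one flattened feature map.
    return feat @ feat.t() / feat.numel()

def style_transfer(extract_features, content_feats, style_grams,
                   init_image, content_layers, style_layers,
                   alpha=1.0, beta=1e3, steps=300, lr=0.05):
    # init_image controls the starting point of the gradient descent
    # (white noise vs. the content image), one hyper-parameter the paper varies.
    image = init_image.clone().requires_grad_(True)
    opt = torch.optim.Adam([image], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        feats = extract_features(image)
        content_loss = sum(((feats[l] - content_feats[l]) ** 2).mean()
                           for l in content_layers)
        style_loss = sum(((gram_matrix(feats[l]) - style_grams[l]) ** 2).mean()
                         for l in style_layers)
        loss = alpha * content_loss + beta * style_loss
        loss.backward()
        opt.step()
    return image.detach()
```

In this sketch, changing `content_layers`/`style_layers` corresponds to the layer-partitioning experiments, and swapping the backbone behind `extract_features` corresponds to the comparison of recognition networks.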

Cited by 2 publications (3 citation statements) · References 4 publications
“…Based on the calculation methods of content loss and style loss functions, it is known that the size of style weights directly affects the losses they produce [7]. Therefore, setting weights essentially means defining the initial losses.…”
Section: Experimental Design
confidence: 99%
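For context, the two losses are typically combined with scalar weights, so the choice of weights sets the relative size of the initial content and style losses (a hedged restatement in the usual notation, not the cited paper's symbols):

L_total = α · L_content + β · L_style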
“…By comparing this network with state-of-the-art alternatives, it is evident that AlexNet and GoogLeNet yield similar results, while VGG19, which focuses on depth, is similar to VGG16. Due to the use of large kernels and a stride of 1 in all convolutional layers, VGG networks effectively retain information [3]. Therefore, based on the outstanding performance of the VGG19 network in style transfer, this project selects it as the model to explore automatic weight matching for enhancing image generation quality.…”
Section: Introduction
confidence: 99%
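As an illustration of how such a recognition network is typically plugged in, the sketch below pulls intermediate activations out of a pretrained VGG19 via torchvision; the weights enum assumes a recent torchvision release, and the layer indices are illustrative choices rather than the configuration used in the cited work.

```python
# Hedged sketch: a pretrained VGG19 as the feature extractor for style
# transfer. Assumes torchvision >= 0.13 for the weights enum.
import torch
from torchvision import models

vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def extract_features(image, layer_ids=(1, 6, 11, 20, 29)):
    # image: normalized (1, 3, H, W) tensor. Run it through vgg.features and
    # keep activations at the requested indices (roughly relu1_1..relu5_1;
    # illustrative picks only).
    feats, x = {}, image
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layer_ids:
            feats[i] = x
    return feats
```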
“…Gatys et al [8,12] define a squared loss on the correlations between feature maps of some layers and synthesize natural textures of high perceptual quality using the pretrained CNN called VGG [3]. Gatys et al [13] then combine the loss on the correlations as a proxy to the style of a painting and the loss on the activations to represent the content of an image, and successfully create artistic images by converting the artistic style to the content image, inspiring several followups [14,15]. Another stream of visualization aims to understand what each neuron has learned in a pretrained network and synthesize an image that maximally activates individual features [5,9] or the class prediction scores [6].…”
Section: Introduction
confidence: 99%
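The "correlations between feature maps" referenced here are usually computed as a Gram matrix over the channels of a layer's activations; a small, hedged sketch follows (the shape and normalization are one common convention, not necessarily the cited papers').

```python
import torch

def gram_matrix(feature_map):
    # feature_map: (channels, height, width) activations of one layer.
    c, h, w = feature_map.shape
    flat = feature_map.reshape(c, h * w)      # one row per channel
    # Inner products between channel responses; spatial layout is discarded,
    # which is why this statistic captures texture/style rather than content.
    return flat @ flat.t() / (c * h * w)
```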