2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00788

Adaptive Convolutions for Structure-Aware Style Transfer

Abstract: We propose Adaptive Convolutions (AdaConv), an extension of Adaptive Instance Normalization (AdaIN) for image style transfer that is able to transfer both statistical and structural style elements. AdaConv can also be applied to generative models such as StyleGAN for photorealistic image synthesis on a multitude of datasets.
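For context, AdaIN — the baseline that AdaConv extends — transfers only channel-wise feature statistics: it re-normalizes the content features so their per-channel mean and standard deviation match those of the style features. A minimal NumPy sketch of that operation (function and variable names are our own, illustrative only, not code from the paper):

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive Instance Normalization: rescale the content feature map
    so its per-channel mean/std match those of the style feature map.
    content, style: arrays of shape (C, H, W)."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    normalized = (content - c_mean) / (c_std + eps)  # zero-mean, unit-std per channel
    return normalized * s_std + s_mean               # adopt the style statistics
```

Because only first- and second-order moments are matched, this captures the statistical but not the structural style elements — which is exactly the gap AdaConv targets with predicted convolution kernels.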

Cited by 47 publications (41 citation statements). References 30 publications.
“…As such, their latent space differs between training and inversion, while ours remains as it was. Also similar to our work is AdaConv [9] for style transfer, but our modified style injection stays much closer to the original StyleGAN, and they do not address inversion.…”
Section: Previous Work
confidence: 86%
“…Different from a traditional CNN, where the filter weights stay fixed once trained, dynamic filters are generated by separate network branches and can be changed according to the input on the fly. Due to this adaptive nature, they can increase the flexibility of a network and have been applied to various tasks, such as super-resolution (Hu et al. 2019), point cloud segmentation (Xu et al. 2020), image deblurring (Lee et al. 2021), and style transfer (Chandran et al. 2021). However, generating such depthwise-separable and spatially-varying filters usually entails memory-intensive network architectures, which are computation-heavy and time-consuming.…”
Section: Dynamic Filters
confidence: 99%
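The dynamic-filter idea in the statement above can be sketched in a few lines: a prediction rule maps the input itself to per-channel kernels, which are then applied depthwise. All names below are ours, and the hand-crafted prediction rule is a toy stand-in for the learned network branch the citing papers describe:

```python
import numpy as np

def predict_kernels(x, k=3):
    """Toy stand-in for a kernel-prediction branch: map each channel's
    global mean to a k x k kernel that blends an identity filter with a
    box blur. A real dynamic-filter network learns this mapping."""
    alpha = 1.0 / (1.0 + np.exp(-x.mean(axis=(1, 2))))  # (C,), in (0, 1)
    identity = np.zeros((k, k))
    identity[k // 2, k // 2] = 1.0
    box = np.full((k, k), 1.0 / (k * k))
    return alpha[:, None, None] * identity + (1.0 - alpha)[:, None, None] * box

def dynamic_depthwise_conv(x, k=3):
    """Depthwise filtering with kernels generated from the input on the
    fly, so the same layer adapts to each sample. x: (C, H, W)."""
    C, H, W = x.shape
    kernels = predict_kernels(x, k)  # (C, k, k), one kernel per channel
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    out = np.empty_like(x)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                out[c, i, j] = (xp[c, i:i + k, j:j + k] * kernels[c]).sum()
    return out
```

The loops exist only for clarity; the point is that `kernels` depends on `x`, whereas a conventional convolution would use weights fixed at training time.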
“…A popular practice [7], [16], [25], [46] directly aligns the statistics between the new artwork and the given style image in the instance feature space. Some works [5], [15], [35] learn to predict dynamic parameters for any specified style. Although such methods greatly improve flexibility, the processing granularity is still the full image, which cannot meet the needs of all users.…”
Section: Neural Style Transfer
confidence: 99%
“…Neural style transfer (NST) is a widely studied artistic creation tool, which aims at transferring the artistic style of a reference image to a content image. Many impressive NST methods have been proposed by exploring content retention [7], [18], robustness analysis [9], [46], the ceiling of style types [5], [16], geometric changes [28], brushstroke simulation [19], etc. Recently, several works have introduced the user's intention, allowing NST to be manually adjusted in the degree of stylization [14], semantic guidance [20], [30], or masked style splicing [40], …”
Section: Introduction
confidence: 99%