2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00929
A Sliced Wasserstein Loss for Neural Texture Synthesis

Cited by 31 publications (25 citation statements) · References 15 publications
“…The sliced Wasserstein loss has been shown to be a good statistical metric [HVCB21] for comparing deep feature maps, but it does not trivially allow comparing feature maps with different numbers of samples (pixels). The "tag" trick introduced in the original paper cannot be applied to our goal of multi-target transfer.…”
Section: Spatial Control and Multi-target Transfer
confidence: 99%
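As a sketch of the mechanics behind this loss, the snippet below computes a sliced Wasserstein distance between two sets of deep features by projecting them onto random 1D directions and sorting. It is an illustrative PyTorch reconstruction, not the authors' reference code, and the function and parameter names are hypothetical; the final sort-and-match step also makes the limitation quoted above concrete, since both feature maps must contain the same number of samples.

import torch

def sliced_wasserstein_loss(x, y, num_projections=32):
    # x, y: (N, C) tensors of deep features, e.g. one C-channel
    # feature vector per pixel. N must match between the two inputs.
    n, c = x.shape
    # Draw random directions on the unit sphere in feature space.
    directions = torch.randn(c, num_projections, device=x.device)
    directions = directions / directions.norm(dim=0, keepdim=True)
    # Project both feature sets onto each random 1D line.
    x_proj = x @ directions  # (N, num_projections)
    y_proj = y @ directions
    # In 1D, optimal transport reduces to sorting both sides and
    # matching ranks -- which is why N must be equal on both sides.
    x_sorted, _ = torch.sort(x_proj, dim=0)
    y_sorted, _ = torch.sort(y_proj, dim=0)
    # Squared Wasserstein-2 cost, averaged over projections.
    return ((x_sorted - y_sorted) ** 2).mean()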
“…For example, Gatys et al [GEB15, GEB16] leverage a pre-trained VGG network [SZ15] to guide style transfer, using the Gram matrix of deep features extracted from the image as their statistical representation. Heitz et al [HVCB21] described an alternative sliced Wasserstein loss as a more complete statistical description of the extracted features. Other approaches train a neural network to transfer the style of images or synthesize textures in a single forward pass [JAFF16, ULVL16, HB17, ZZB*18].…”
Section: Style Transfer
confidence: 99%
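For contrast with the sliced Wasserstein sketch above, the Gram-matrix statistic used by Gatys et al reduces to a single inner product of the feature matrix with itself. A minimal illustration, assuming the same hypothetical (N, C) feature layout as before:

def gram_matrix(feats):
    # feats: (N, C) deep features; the (C, C) Gram matrix records
    # average channel co-activations and discards spatial layout.
    n, _ = feats.shape
    return feats.t() @ feats / n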
“…The sliced Wasserstein (SW) distance has exhibited outstanding merit for training deep generative networks [9,46]. Recently, the SW loss has been successfully applied to texture synthesis [17] and image enhancement [8], among other tasks. Here, we also use the SW loss L_SW to optimize SelfDZSR; a detailed description is given in the supplementary material.…”
Section: Learning Objective
confidence: 99%
“…They avoid computing OT problems in high-dimensional spaces by projecting the distributions onto random 1-dimensional lines and solving a 1-dimensional OT problem for each. Owing to their simplicity and efficiency, this class of approaches has been applied to several fields in computer vision, such as generative modelling (Deshpande et al, 2018) and texture synthesis (Heitz et al, 2021).…”
Section: B Existing Computational Approaches of the OT Problem
confidence: 99%
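A hypothetical usage of the sliced_wasserstein_loss sketch given earlier, matching feature statistics by gradient descent; the random tensors merely stand in for deep feature maps and are not drawn from any cited experiment.

import torch
# Assumes the sliced_wasserstein_loss sketch defined above.
target_feats = torch.randn(4096, 64)  # stand-in for target texture features
image_feats = torch.randn(4096, 64, requires_grad=True)
optimizer = torch.optim.Adam([image_feats], lr=0.01)
for step in range(200):
    optimizer.zero_grad()
    loss = sliced_wasserstein_loss(image_feats, target_feats)
    loss.backward()
    optimizer.step()

In a full texture-synthesis pipeline the optimized variable would be the image itself, with features re-extracted by a frozen network (e.g. VGG) at each step; optimizing the feature tensor directly keeps the sketch self-contained.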