2018
DOI: 10.48550/arxiv.1806.00804
Preprint
NAM: Non-Adversarial Unsupervised Domain Mapping

Cited by 2 publications (3 citation statements)
References 0 publications
“…A very different recent approach is NAM [5], which relies on having a high quality pre-trained unsupervised generative model for the source domain. If such a generator is available, a generative model needs to be trained only once per target dataset, and can thus be used to map to many target domains without adversarial generative training.…”
Section: Related Work
confidence: 99%
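The statement above describes NAM's key idea: keep a pretrained source-domain generative model fixed and fit only a mapping to each target dataset, with no adversarial training. A toy, hypothetical sketch of that recipe follows — a fixed random linear map stands in for the pretrained generator, and per-sample latents plus a linear mapping are fit by plain gradient descent on a reconstruction loss (all dimensions and names here are illustrative, not taken from the NAM paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "pretrained" source-domain generator G (fixed; never updated below).
# Illustrative sizes: latent dim d, data dim n, N target samples.
d, n, N = 4, 8, 32
G = rng.normal(size=(n, d))                # stands in for a pretrained generator
W_true = rng.normal(size=(n, n))           # unknown ground-truth domain mapping
Y = W_true @ G @ rng.normal(size=(d, N))   # target-domain samples

# NAM-style fit: jointly optimize per-sample latents Z and the mapping T
# by gradient descent on a reconstruction loss -- no discriminator involved.
Z = 0.1 * rng.normal(size=(d, N))
T = np.zeros((n, n))
lr = 0.01
for step in range(3000):
    R = T @ (G @ Z) - Y                    # residual: mapped generations vs targets
    T -= lr * (R @ (G @ Z).T / N)          # gradient step on the mapping
    Z -= lr * (G.T @ T.T @ R / N)          # gradient step on the latents

loss = np.mean((T @ G @ Z - Y) ** 2)
print(f"final reconstruction loss: {loss:.4f}")
```

Because the generator is frozen, only `T` and `Z` are re-fit for a new target dataset, which is what lets one pretrained source model serve many target domains.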
“…From top to bottom, we show image-to-image translation results for: Apples ↔ Oranges and Summer ↔ Winter images. All of our results were achieved with the full loss in (5), with the relative weights reported in Section 3.5. Qualitatively, it may be observed that in all three image-toimage translation tasks, CrossNet outperforms CycleGAN, providing better texture transfer, color reproduction, and also better structure (visible in the Apples ↔ Oranges translations).…”
Section: Unpaired Image-to-Image Translation
confidence: 99%
“…There has been a series of non-adversarial approaches to learning domain mappings (Hoshen and Wolf [2018], Long et al. [2015], …, Haeusser et al. [2017]). However, all the aforementioned methods focus on the problem of large amounts of unlabelled data in the target domain.…”
Section: Introduction
confidence: 99%