2017
DOI: 10.48550/arxiv.1705.01314
Preprint

Detach and Adapt: Learning Cross-Domain Disentangled Deep Representation

Cited by 4 publications (5 citation statements) | References 0 publications
“…The third family performs both marginal-distribution-difference elimination and synthetic-data generation, such as Cross-Domain Representation Disentangler (CDRD) [15], Synthesized Examples for Generalized Zero-Shot Learning (SE-GZSL) [30], Disentangled Synthesis for Domain Adaptation (DiDA) [6], and Attribute-Based Synthetic Network (ABS-Net) [18], among others. Madras et al. [20] proposed such an FML framework.…”
Section: Hybrid Methods (mentioning)
confidence: 99%
“…For the second thrust, which generates data with unseen (class, domain) combinations, we chose ELEGANT [35], which only uses domain labels, and ML-VAE [5], which only uses class labels. For the third thrust, which uses hybrid solutions, we chose ABS-Net [18], the base method of ours without an adversarial mechanism, as well as CDRD [15] and SE-GZSL [30], which can be treated as advanced instantiated algorithms under the FML framework of Madras et al. [20]. Finally, we compare with the direct learning strategy that stacks P, G_1, and D_11 as the whole network.…”
Section: Methods for Comparison (mentioning)
confidence: 99%
“…This provides a considerable improvement for some cross-domain recognition tasks [37, 23, 34, 26, 30, 21, 6, 9]. Specifically, a number of deep domain adaptation models have applied the adversarial training strategy [35, 36, 8, 21, 5, 20, 22]. DANN [8] employs a gradient reversal layer between the feature layer and the domain discriminator, causing the feature representation to anti-learn the domain difference and hence adapt well to the target domain.…”
Section: Related Work (mentioning)
confidence: 99%
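
The gradient reversal mechanism that DANN relies on is small enough to sketch. The snippet below is a minimal PyTorch-style illustration of such a layer (an assumption about how one might implement it, not the DANN authors' code); feature_extractor and domain_discriminator are hypothetical placeholder modules.

    from torch.autograd import Function

    class GradientReversal(Function):
        # Identity in the forward pass; reverses and scales the gradient in the backward pass.
        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            # Gradient w.r.t. x is -lambd times the upstream gradient; lambd itself gets no gradient.
            return -ctx.lambd * grad_output, None

    # Hypothetical usage: place the layer between the features and the domain discriminator.
    # features = feature_extractor(x)
    # domain_logits = domain_discriminator(GradientReversal.apply(features, 1.0))
    # Minimizing the domain loss then drives the feature extractor to increase it,
    # i.e. to produce domain-invariant ("anti-learned") features.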
“…Various proposals for more structure-imposing regularization have been made, either with some sort of supervision (e.g., Siddharth et al., 2017; Bouchacourt et al., 2017; Liu et al., 2017; Mathieu et al., 2016; Cheung et al., 2014) or completely unsupervised (e.g., Higgins et al., 2017; Kim & Mnih, 2018; Chen et al., 2018; Kumar et al., 2018; Esmaeili et al., 2018).…”
Section: Related Work (mentioning)
confidence: 99%
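
As a concrete instance of the unsupervised regularizers cited above, the beta-VAE of Higgins et al. (2017) simply reweights the KL term of the usual VAE objective; a sketch of that objective (standard in the literature, not quoted from the citing paper) is

    \mathcal{L}_{\beta\text{-VAE}}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right] - \beta\, D_{\mathrm{KL}}\left(q_\phi(z \mid x) \,\|\, p(z)\right),

where beta = 1 recovers the standard ELBO and beta > 1 increases the pressure toward the factorized prior p(z), encouraging disentangled latent factors.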