2020
DOI: 10.1609/aaai.v34i07.6801

Domain Conditioned Adaptation Network

Abstract: Tremendous research efforts have been made to advance deep domain adaptation (DA) by seeking domain-invariant features. Most existing deep DA models focus only on aligning feature representations of task-specific layers across domains, while using a fully shared convolutional architecture for source and target. However, we argue that such strongly shared convolutional layers may be harmful for domain-specific feature learning when the source and target data distributions differ to a large extent. In this p…
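The abstract's core argument, that early convolutional layers should not be fully shared when the two domains differ substantially, can be illustrated with a minimal sketch. The module below is a toy assumption (PyTorch, with made-up names such as `PartiallySharedBackbone` and `domain_branch` and arbitrary layer sizes), not the architecture proposed in the paper: it keeps a shared convolutional stem and adds a small per-domain residual branch.

```python
# Hypothetical sketch of "partially shared" convolutional layers:
# early blocks are shared across domains, while a lightweight
# domain-specific branch lets source and target keep their own
# low-level statistics. Names and sizes are illustrative only.
import torch
import torch.nn as nn

class PartiallySharedBackbone(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # Convolutional stem shared by both domains.
        self.shared = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Lightweight domain-specific branches.
        self.domain_branch = nn.ModuleDict({
            "source": nn.Conv2d(channels, channels, 3, padding=1),
            "target": nn.Conv2d(channels, channels, 3, padding=1),
        })

    def forward(self, x, domain="source"):
        h = self.shared(x)
        # Residual domain-specific correction on top of the shared features.
        return h + self.domain_branch[domain](h)

# Usage: feats = PartiallySharedBackbone()(torch.randn(4, 3, 32, 32), "target")
```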

Cited by 83 publications (29 citation statements)
References 52 publications
“…Comparison with previous methods Table 1 reports the performances of the proposed method compared with state-of-the-art depth predicting methods [19,22,23,24] and the most commonly-used domain adaptation algorithms [25,26,27]. For the source domain, it can be seen that our method is able to generate a depth map similar to the ground truth as shown in top row of Fig.…”
Section: Results (mentioning)
confidence: 96%
“…Class-level methods align the conditional distribution based on pseudo-labels (Chen et al, 2020a; Luo et al, 2020a; Li et al, 2020b; Jiang et al, 2020; Liang et al, 2020; Venkat et al, 2020). Conditional alignment methods (Xie et al, 2018; Long et al, 2018) minimize the discrepancy between conditional distributions.…”
Section: Related Work (mentioning)
confidence: 99%
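The class-level alignment idea quoted above (pseudo-labeling target samples and then aligning conditional distributions) can be sketched roughly as follows. This is a minimal illustration with assumed names (`class_conditional_alignment_loss`, a confidence threshold of 0.9) and a per-class centroid-matching stand-in for the conditional discrepancies used by the cited methods; it is not the implementation of any particular paper.

```python
# Minimal sketch of class-conditional alignment with pseudo-labels:
# confident target predictions become pseudo-labels, then per-class
# feature means of source and target are pulled together.
# All tensor shapes and names are assumptions.
import torch

def class_conditional_alignment_loss(src_feat, src_label, tgt_feat, tgt_logits,
                                     num_classes, conf_threshold=0.9):
    # Pseudo-label target samples, keeping only confident predictions.
    tgt_prob = torch.softmax(tgt_logits, dim=1)
    tgt_conf, tgt_pseudo = tgt_prob.max(dim=1)
    keep = tgt_conf >= conf_threshold

    loss, matched = src_feat.new_zeros(()), 0
    for c in range(num_classes):
        s = src_feat[src_label == c]
        t = tgt_feat[keep & (tgt_pseudo == c)]
        if len(s) == 0 or len(t) == 0:
            continue
        # Align per-class centroids (a crude stand-in for class-conditional MMD).
        loss = loss + (s.mean(dim=0) - t.mean(dim=0)).pow(2).sum()
        matched += 1
    return loss / max(matched, 1)
```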
“…Domain alignment methods include DAN (Long et al, 2015), DANN (Ganin et al, 2016), JAN (Long et al, 2017a). Class-level methods include conditional alignment methods (CDAN (Long et al, 2018), DCAN (Li et al, 2020b), ALDA (Chen et al, 2020b)), and contrastive methods (DRMEA (Luo et al, 2020a), ETD (Li et al, 2020c), DADA (Tang and Jia, 2020b), SAFN (Xu et al, 2019)). We only report available results in each baseline.…”
Section: Baselines (mentioning)
confidence: 99%
“…DUCDA [54] develops an attention transfer mechanism for DA, which transfers the knowledge of discriminative patterns of source images to the target. In contrast, instead of exploring spatial attention knowledge, DCAN [19] explores the low-level domain-dependent knowledge in the channel attention.…”
Section: Related Work (mentioning)
confidence: 99%
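For reference, the generic channel attention mechanism that the quoted statement builds on (squeeze-and-excitation style per-channel reweighting) looks roughly like the sketch below. This is the standard SE formulation, not DCAN's domain conditioned variant, which additionally conditions the excitation on the domain; the class name and reduction ratio are assumptions.

```python
# Generic squeeze-and-excitation channel attention: pool spatial
# dimensions to get per-channel statistics, pass them through a small
# bottleneck MLP, and reweight the feature map channel-wise.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                           # x: (N, C, H, W)
        w = x.mean(dim=(2, 3))                       # squeeze: global average pooling
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)   # excitation: per-channel gates
        return x * w                                 # reweight channels
```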