2021
DOI: 10.1109/tpami.2020.2964173
Deep Residual Correction Network for Partial Domain Adaptation



Cited by 132 publications (79 citation statements)
References 34 publications
“…Another example is RTN [38], which considers feature fusion with MMD and designs a residual function to perform classifier adaptation. Further, DRCN [24] utilizes a residual correction block to explicitly mitigate the domain feature gap. Apart from MMD, Zhang et al. [22] define a new divergence, Margin Disparity Discrepancy (MDD), and validate that it has rigorous generalization bounds.…”
Section: Discrepancy Metric Minimization
confidence: 99%
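The discrepancy measures named in this excerpt (MMD, MDD) are standard building blocks. As a rough, non-authoritative illustration, a minimal MMD penalty between a source and a target feature batch could look like the PyTorch sketch below; the RBF kernel and fixed bandwidth are assumptions of this sketch, not the exact estimators used by RTN, DRCN, or MDD.

```python
# Minimal sketch of a squared-MMD penalty with an RBF kernel (assumed bandwidth).
import torch

def rbf_mmd2(source_feats: torch.Tensor, target_feats: torch.Tensor, sigma: float = 1.0):
    """Empirical squared MMD between two feature batches of shape [batch, dim]."""
    def kernel(a, b):
        # Pairwise squared Euclidean distances -> Gaussian kernel values.
        return torch.exp(-torch.cdist(a, b) ** 2 / (2.0 * sigma ** 2))

    return (kernel(source_feats, source_feats).mean()
            + kernel(target_feats, target_feats).mean()
            - 2.0 * kernel(source_feats, target_feats).mean())

# Typical usage: total_loss = cls_loss_on_source + lambda_mmd * rbf_mmd2(f_s, f_t)
```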
“…Unlike previous works [24], [38], we adapt all task-specific layers and, more importantly, explicitly measure domain discrepancy from a structural aspect. To be specific, we design a feature adaptation module and plug it into all higher task-specific layers, including the classification layer.…”
Section: Domain-general Feature Learning
confidence: 99%
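As a loose sketch of the plug-in idea described in these excerpts (a correction branch added on top of a task-specific layer, with an identity shortcut so the branch only has to model the residual source-target gap), one might write something like the following; the two-layer bottleneck and its width are assumptions of this sketch rather than the exact DRCN architecture.

```python
import torch.nn as nn

class ResidualCorrectionBlock(nn.Module):
    """Hypothetical correction branch plugged onto a task-specific layer.

    The identity shortcut means the branch only needs to learn the residual
    between source and target feature distributions, not the features themselves.
    """
    def __init__(self, dim: int, bottleneck: int = 256):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Linear(dim, bottleneck),
            nn.ReLU(inplace=True),
            nn.Linear(bottleneck, dim),
        )

    def forward(self, x):
        return x + self.branch(x)
```

A discrepancy penalty such as the MMD sketch above would then be applied to the corrected features to align the two domains.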
“…Since the machine is in a healthy working state most of the time, the test data may contain only a few types of fault data. That is, the distributions of the two domains differ, and the label space of the target domain is a subset of that of the source domain [20, 21, 22]. Through long-term data accumulation, the training data can cover as many health types as possible, whereas it is difficult to guarantee that the testing data and the training data share the same set of health types.…”
Section: Introduction
confidence: 99%
“…Deep domain adaptation methods attempt to improve the learning of domain-invariant feature representations by embedding distribution matching modules into the network architectures. So far, a variety of distribution similarity measures have been incorporated into the network architecture to learn transferable representations [Long et al., 2017]. Meanwhile, a collection of adversarial-learning-based domain adaptation methods have been proposed, which align domain distributions by minimizing an approximate discrepancy in an adversarial training setting [Ganin and Lempitsky, 2014].…”
Section: Introduction
confidence: 99%
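For the adversarial route mentioned here (in the spirit of the gradient-reversal approach of Ganin and Lempitsky), a minimal sketch is a small domain classifier trained through a gradient reversal layer; the classifier width and the reversal coefficient below are assumptions of this sketch.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales gradients by -lambda in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DomainDiscriminator(nn.Module):
    """Small MLP that predicts whether a feature comes from the source or the target domain."""
    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(inplace=True), nn.Linear(hidden, 2)
        )

    def forward(self, feats, lambd: float = 1.0):
        # Reversed gradients push the feature extractor to fool the discriminator,
        # which encourages domain-invariant features.
        return self.net(GradReverse.apply(feats, lambd))
```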
“…Nevertheless, in real-world applications, it is often nontrivial to find source domains with the same label spaces as the target domains of interest. Instead, owing to the availability of large-scale labelled datasets such as ImageNet [Russakovsky et al., 2015], a more practical yet more challenging scenario, referred to as partial domain adaptation (PDA), relaxes the constraint of shared label spaces. It enables knowledge transfer from source domains with more classes to target domains with fewer classes, without any knowledge of the size or categories of the target classes.…”
Section: Introduction
confidence: 99%
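One common way to cope with the smaller target label space in PDA (a PADA-style class weighting, given here only as an illustration and not necessarily the mechanism used by the paper indexed on this page) is to estimate per-class weights from the target network's own predictions and use them to down-weight source samples from classes the target likely does not contain, roughly as follows.

```python
import torch
import torch.nn.functional as F

def estimate_class_weights(target_logits: torch.Tensor) -> torch.Tensor:
    """Average the target softmax over a batch (or epoch) and normalize by the maximum.

    Classes absent from the target domain tend to receive small average
    probabilities, so their weights shrink toward zero.
    """
    weights = torch.softmax(target_logits, dim=1).mean(dim=0).detach()
    return weights / weights.max()

def weighted_source_loss(source_logits, source_labels, class_weights):
    # Per-class weights down-weight source-only ("outlier") classes in the source loss.
    return F.cross_entropy(source_logits, source_labels, weight=class_weights)
```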