Domain Adaptation for Visual Understanding 2020
DOI: 10.1007/978-3-030-30671-7_6
On Minimum Discrepancy Estimation for Deep Domain Adaptation

Abstract: Given large sets of labeled data, Deep Learning (DL) has achieved remarkable success in computer vision, particularly in object classification and recognition tasks. However, DL cannot always perform well when the training and testing images come from different distributions, i.e., in the presence of domain shift between training and testing images. DL models also suffer in the absence of labeled input data. Domain adaptation (DA) methods have been proposed to compensate for the poor perform… Show more

Cited by 31 publications (7 citation statements)
References 31 publications
“…• High-order Maximum Mean Discrepancy (HoMM) [4]: aligns high-order moments to effectively reduce the discrepancy between the two domains. • Minimum Discrepancy Estimation for Deep Domain Adaptation (MMDA) [27]: combines MMD and correlation alignment with entropy minimization to effectively address the domain shift issue. • Domain-Adversarial Training of Neural Networks (DANN) [9]: uses a gradient reversal layer to adversarially train a domain discriminator network against an encoder network.…”
Section: Baseline Methods
confidence: 99%
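The MMD term combined in MMDA can be sketched with its simplest estimator, the linear-kernel MMD: the squared distance between the mean feature vectors of the source and target batches. This is a minimal illustrative sketch, not code from the cited papers; the function name and the NumPy setting are assumptions.

```python
import numpy as np

def mmd_linear(source, target):
    """Linear-kernel MMD estimate: squared Euclidean distance
    between the mean feature vectors of the two domains."""
    delta = source.mean(axis=0) - target.mean(axis=0)
    return float(delta @ delta)
```

In practice this quantity is added to the classification loss so that the encoder learns features whose first moments match across domains; kernelized variants extend the match to higher moments.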
“…They jointly learned both instance-based and centre-based discriminative learning schemes for deep domain adaptation. Other work on this topic [21] presents an unsupervised deep domain adaptation method based on CORAL and MMD. By jointly applying CORAL and MMD loss layers to the last two layers of the source and target networks, this method aligns the second-order statistics and higher-order statistics, respectively.…”
Section: Discrepancy-Based Methods
confidence: 99%
“…To achieve high prediction certainty, entropy minimization pushes examples far from the decision boundary. The entropy can be minimized directly [36, 37] or in an adversarial way [38, 39]. In [36], Long et al. imposed a conditional entropy loss on the target-domain data, which ensures that the target classifier fits the target-specific structures well.…”
Section: Related Work
confidence: 99%
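The entropy objective described above is simply the mean Shannon entropy of the classifier's softmax predictions on (unlabeled) target data; minimizing it drives predictions toward a single confident class. A small self-contained sketch (names illustrative, assuming raw logits as input):

```python
import numpy as np

def entropy_loss(logits):
    """Mean Shannon entropy of the softmax predictions.
    Low when predictions are confident, maximal when uniform."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())
```

For a C-class problem the loss is bounded by log C (uniform predictions) and approaches zero as the target classifier becomes certain, which is why adding it to the adaptation objective sharpens decision boundaries in low-density regions.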