2021
DOI: 10.1016/j.neucom.2020.09.091
Domain generalization via optimal transport with metric similarity learning

Cited by 42 publications (30 citation statements)
References 17 publications
“…To improve the model generalization ability over out-of-distribution test samples, some work focuses on aligning feature representations across different domains. The minimization of feature discrepancy can be conducted over various distance metrics, including second-order statistics (Sun & Saenko, 2016), maximum mean discrepancy (Tzeng et al., 2014) and Wasserstein distance (Zhou et al., 2021b), or measured by adversarial networks (Ganin et al., 2016). Others apply data augmentation to generate new samples or domains to promote the consistency of feature representations, such as Mixup across existing domains (Xu et al., 2020; Yan et al., 2020b), or in an adversarial manner (Zhao et al., 2020; Qiao et al., 2020).…”
Section: General Methods for OOD and Noisy Labels (mentioning)
confidence: 99%
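The excerpt above lists several discrepancy measures used to align feature distributions across source domains. Purely as a hypothetical illustration of one of them (not the implementation of any cited work), the sketch below computes a biased estimate of the squared maximum mean discrepancy (MMD) with a Gaussian kernel between feature batches from two source domains; the bandwidth, batch sizes, and feature dimension are placeholder assumptions.

```python
import torch

def gaussian_kernel(x, y, sigma=1.0):
    """RBF kernel matrix k(x_i, y_j) = exp(-||x_i - y_j||^2 / (2 * sigma^2))."""
    sq_dists = torch.cdist(x, y, p=2) ** 2
    return torch.exp(-sq_dists / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of the squared MMD between samples x ~ P and y ~ Q."""
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean()
            - 2 * gaussian_kernel(x, y, sigma).mean())

# Toy usage: feature batches from two source domains, e.g. outputs of a shared encoder.
feats_a = torch.randn(32, 128)        # placeholder 128-d features, domain A
feats_b = torch.randn(32, 128) + 0.5  # placeholder 128-d features, domain B (shifted)
print(mmd2(feats_a, feats_b, sigma=2.0).item())
```

In an alignment-based training loop, such a term would typically be weighted and added to the classification loss so that the encoder is penalized for producing domain-distinguishable features.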
“…To help accelerate research by focusing community attention and simplifying systematic comparisons between data collection and implementation method, we present DrugOOD, a systematic OOD dataset curator and benchmark for AI-aided drug discovery which comes with an open-source Python package that fully automates the data curation process and OOD benchmarking process. We focus on the most challenging OOD setting: domain generalization (Zhou et al, 2021b) problem in AI-aided drug discovery, though DrugOOD can be easily adapted to other OOD settings, such as subpopulation shift (Koh et al, 2021) and domain adaptation (Zhuang et al, 2020). Our dataset is also the first AIDD dataset curator with realistic noise annotations, that can serve as an important testbed for the setting of learning under noise.…”
Section: Introduction (mentioning)
confidence: 99%
“…Explicit feature alignment. This line of work aligns the features across source domains to learn domain-invariant representations through explicit feature distribution alignment [59,117,118], or feature normalization [119,120,121,57].…”
Section: Domain-invariant Representation-based DG (mentioning)
confidence: 99%
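The excerpt also points to feature normalization as a route toward domain-invariant representations. As a minimal, hypothetical sketch (not any specific method from the cited references), the snippet below L2-normalizes encoder outputs so that only feature directions, rather than domain-dependent magnitudes, reach the classifier; the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalizedEncoder(nn.Module):
    """Encoder whose output features are projected onto the unit hypersphere."""
    def __init__(self, in_dim=256, feat_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim))

    def forward(self, x):
        z = self.backbone(x)
        return F.normalize(z, p=2, dim=1)  # unit-length features, scale removed

encoder = NormalizedEncoder()
features = encoder(torch.randn(8, 256))   # placeholder inputs
print(features.norm(dim=1))               # ~1.0 for every sample
```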
“…Motiian et al. [55] introduced a cross-domain contrastive loss for representation learning, where mapped domains are semantically aligned and yet maximally separated. Some methods explicitly minimized the feature distribution divergence by minimizing the maximum mean discrepancy (MMD) [122,116,123,124], second-order correlation [125,126,127], both mean and variance (moment matching) [118], Wasserstein distance [117], etc., of domains for either domain adaptation or domain generalization. Zhou et al. [117] aligned the marginal distribution of different source domains via optimal transport by minimizing the Wasserstein distance to achieve a domain-invariant feature space.…”
Section: Domain-invariant Representation-based DG (mentioning)
confidence: 99%
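As a rough illustration of aligning marginal feature distributions with optimal transport (a sketch under simplifying assumptions, not the exact procedure of Zhou et al. [117]), the snippet below estimates an entropy-regularized Wasserstein distance between two feature batches with a few Sinkhorn iterations; the batch sizes, regularization strength, and iteration count are arbitrary placeholders.

```python
import torch

def sinkhorn_wasserstein(x, y, eps=0.1, n_iters=200):
    """Entropy-regularized OT cost between uniform empirical measures on x and y."""
    n, m = x.shape[0], y.shape[0]
    cost = torch.cdist(x, y, p=2) ** 2      # squared Euclidean ground cost
    cost = cost / cost.max()                # rescale for numerical stability
    a = torch.full((n,), 1.0 / n)           # uniform weights on x
    b = torch.full((m,), 1.0 / m)           # uniform weights on y
    K = torch.exp(-cost / eps)              # Gibbs kernel
    u = torch.ones(n)
    for _ in range(n_iters):                # Sinkhorn fixed-point updates
        v = b / (K.t() @ u)
        u = a / (K @ v)
    plan = u[:, None] * K * v[None, :]      # transport plan diag(u) K diag(v)
    return (plan * cost).sum()              # <plan, cost>: regularized OT estimate

feats_a = torch.randn(64, 32)               # placeholder features, source domain A
feats_b = torch.randn(64, 32) + 1.0         # placeholder features, source domain B
print(sinkhorn_wasserstein(feats_a, feats_b).item())
```

In practice this quantity serves as a differentiable alignment loss on encoder outputs; the ground cost is rescaled here only to keep the Sinkhorn iterations numerically stable at small regularization values.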
“…To address the problem of DG, in practice, one usually parameterizes the hypothesis as composed of a representation function followed by a labeling function [Albuquerque et al., 2019, Dou et al., 2019, Li et al., 2018b, Zhou et al., 2021a]. This approach has its roots in the seminal works of [Ben-David et al., 2007, 2010], where upper bounds on the risk of the unseen domain were derived for the simple but instructive binary classification setting.…”
Section: Introduction (mentioning)
confidence: 99%
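To make the parameterization described in the excerpt concrete, here is a minimal, hypothetical sketch of a hypothesis built as a labeling function on top of a shared representation function and trained on a pooled batch from the source domains; the layer widths, class count, and optimizer settings are placeholders, and an alignment penalty such as the MMD or optimal-transport losses sketched earlier would typically be added on the representation outputs.

```python
import torch
import torch.nn as nn

# h = f o g: a shared representation g followed by a labeling function f.
representation = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 64))  # g
labeling = nn.Linear(64, 10)                                                        # f
hypothesis = nn.Sequential(representation, labeling)                                # h

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(hypothesis.parameters(), lr=0.01)

# One illustrative step on a pooled batch drawn from several source domains.
x = torch.randn(32, 256)            # placeholder inputs
y = torch.randint(0, 10, (32,))     # placeholder labels
loss = criterion(hypothesis(x), y)  # an alignment penalty on representation(x) can be added here
optimizer.zero_grad()
loss.backward()
optimizer.step()
```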