2022
DOI: 10.48550/arXiv.2206.04046
Preprint

Sparse Fusion Mixture-of-Experts are Domain Generalizable Learners

Abstract: Domain generalization (DG) aims at learning models that generalize under distribution shifts, avoiding redundant overfitting to massive training data. Previous works with complex loss designs and gradient constraints have not yet led to empirical success on large-scale benchmarks. In this work, we reveal the generalizability of the mixture-of-experts (MoE) model on DG, leveraging its experts to distributively handle multiple aspects of the predictive features across domains. To this end, we propose Sparse Fusion Mixture-of-Experts…
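
The core mechanism the abstract refers to, routing each input to a small number of specialized experts, can be illustrated with a minimal top-k gated mixture-of-experts layer. The sketch below is a generic sparse MoE layer in PyTorch, not the paper's SF-MoE implementation; the class name, expert width, and routing details are assumptions chosen for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseTopKMoE(nn.Module):
    # Minimal top-k gated mixture-of-experts layer (illustrative sketch only,
    # not the paper's SF-MoE; names and sizes here are hypothetical).
    def __init__(self, dim: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(dim, num_experts)  # router producing expert scores
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim); each input is routed to its top-k experts only.
        scores = self.gate(x)                                   # (batch, num_experts)
        topk_vals, topk_idx = scores.topk(self.top_k, dim=-1)   # keep the k best experts per input
        weights = F.softmax(topk_vals, dim=-1)                  # renormalize over the selected experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = topk_idx[:, slot] == e                   # inputs whose slot-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Usage: acts as a drop-in replacement for a dense MLP block.
layer = SparseTopKMoE(dim=64)
y = layer(torch.randn(8, 64))
print(y.shape)  # torch.Size([8, 64])

Only top_k of the num_experts sub-networks run for any given input, which is what keeps such a layer sparse while letting different experts capture different aspects of the features.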

Cited by 0 publications
References 42 publications