2021
DOI: 10.48550/arxiv.2110.14248
Preprint

Learning Domain Invariant Representations in Goal-conditioned Block MDPs

Cited by 2 publications (2 citation statements)
References 14 publications

“…Given this interesting setting and the promises of domain generalization in studying machine learning robustness, the community has developed a torrent of methods. Most of the existing methods fall into two categories: one is to build explicit regularization that pushes a model to learn representations that are invariant to the "style" across these domains [54][55][56][57][58][59][60][61][62][63][64][65][66][67][68][69][70][71][72]; the other one is to perform data augmentation that can introduce more diverse data to enrich the data of certain "semantic" information with the "style" from other domains [73][74][75][76][77][78][79][80], and also aims to train a model that is invariant to these "styles". More recently, there has been a line of approaches that aims to distill knowledge from pre-trained models into a smaller model to improve generalization performance [81][82][83][84][85].…”
Section: Domain Generalization
confidence: 99%
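The excerpt above groups domain generalization methods into invariance regularization and style-based data augmentation. As a rough illustration of the first category only, the following is a minimal PyTorch sketch that adds a penalty on the dispersion of per-domain feature means to a standard classification loss. The encoder, tensor shapes, and the specific penalty are illustrative assumptions, not the method of the indexed preprint or of any particular reference cited in the excerpt.

```python
import torch
import torch.nn as nn

# Hypothetical encoder/classifier; architecture and sizes are illustrative only.
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
classifier = nn.Linear(16, 10)
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3
)
ce = nn.CrossEntropyLoss()

def training_step(domain_batches, invariance_weight=1.0):
    """domain_batches: list of (x, y) pairs, one batch per training domain."""
    task_loss = 0.0
    domain_means = []
    for x, y in domain_batches:
        z = encoder(x)                       # features we want to make domain-invariant
        task_loss = task_loss + ce(classifier(z), y)
        domain_means.append(z.mean(dim=0))   # mean representation of this domain's batch
    # Crude invariance regularizer: penalize variance of per-domain feature means,
    # pushing the encoder toward representations that look alike across domains.
    means = torch.stack(domain_means)        # (num_domains, feature_dim)
    invariance_penalty = means.var(dim=0, unbiased=False).sum()
    loss = task_loss / len(domain_batches) + invariance_weight * invariance_penalty
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with two synthetic "domains" (random data, purely for illustration):
batches = [(torch.randn(8, 32), torch.randint(0, 10, (8,))) for _ in range(2)]
print(training_step(batches))
```

Real methods in this category differ mainly in what invariance criterion replaces the simple mean-dispersion penalty used here (e.g. adversarial domain discriminators or gradient-based penalties).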
“…It extends the setup of domain adaptation to a setting in which the testing distribution data, even unlabelled, is not available during training. Instead, models are trained with data from multiple training distributions, and enforcing invariance across these training distributions has become a major theme [1,10,19,22,42,47,52,54,69].…”
Section: Domain Adaptation, Domain Generalization, and New Paradigms
confidence: 99%