2021
DOI: 10.48550/arxiv.2112.10157
Preprint

Rethinking Importance Weighting for Transfer Learning

Abstract: A key assumption in supervised learning is that training and test data follow the same probability distribution. However, this fundamental assumption is not always satisfied in practice, e.g., due to changing environments, sample selection bias, privacy concerns, or high labeling costs. Transfer learning (TL) relaxes this assumption and allows us to learn under distribution shift. Classical TL methods typically rely on importance weighting: a predictor is trained based on the training losses weighted according …
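To make the classical scheme the abstract refers to concrete, below is a minimal sketch of importance-weighted empirical risk minimization: each per-example training loss is multiplied by an importance weight, the ratio of the test density to the training density at that example, before averaging. The density-ratio weights are assumed to be estimated beforehand by a separate procedure; the function name `iw_training_step` and the toy data are illustrative, not from the paper.

```python
# Minimal sketch of importance-weighted empirical risk minimization (IW-ERM).
# Assumption: weights[i] approximates p_test(x_i) / p_train(x_i) and has been
# estimated beforehand; the estimation step is not shown here.
import torch
import torch.nn as nn

def iw_training_step(model, optimizer, x, y, weights):
    """One gradient step on the importance-weighted training loss."""
    optimizer.zero_grad()
    logits = model(x)
    # Keep per-example losses unreduced so each one can be reweighted.
    per_example_loss = nn.functional.cross_entropy(logits, y, reduction="none")
    # Weighted average: examples more likely under the test distribution
    # than under the training distribution contribute more to the loss.
    loss = (weights * per_example_loss).mean()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random data and uniform weights (i.e., no distribution shift).
model = nn.Linear(10, 3)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(32, 10)
y = torch.randint(0, 3, (32,))
weights = torch.ones(32)  # replace with estimated density ratios
print(iw_training_step(model, optimizer, x, y, weights))
```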

Cited by 1 publication (1 citation statement)
References 40 publications
“…Given its importance in real-world applications, the problem of how to learn from shifting distributions has been widely studied. Much past work has focused on a single shift between training/test data (Lu et al., 2021; Wang & Deng, 2018; Fakoor et al., 2020b), as well as restricted forms of shift involving changes in only the features (Sugiyama et al., 2007a; Reddi et al., 2015a), labels (Lipton et al., 2018; Garg et al., 2020; Alexandari et al., 2020), or the underlying relationship between the two (Zhang et al., 2013; Lu et al., 2018). Approaches that handle distributions evolving over time have been considered in the literature on concept drift (Gomes et al., 2019; Souza et al., 2020), reinforcement learning, where the shift is between the target policy and the behavior policy (Schulman et al., 2015; Wang et al., 2016; Fakoor et al., 2020a), (meta) online learning (Shalev-Shwartz, 2012; Finn et al., 2019; Harrison et al., 2020; Wu et al., 2021), and task-free continual/incremental learning (Aljundi et al., 2019; He et al., 2019), but to our knowledge, existing methods for these settings do not employ time-varying data weights like we propose here.…”
Section: Related Work
confidence: 99%