2021
DOI: 10.48550/arxiv.2108.06721
Preprint

Training for the Future: A Simple Gradient Interpolation Loss to Generalize Along Time

Abstract: In several real-world applications, machine learning models are deployed to make predictions on data whose distribution changes gradually over time, leading to a drift between the train and test distributions. Such models are often re-trained on new data periodically, and they hence need to generalize to data not too far into the future. In this context, there is much prior work on enhancing temporal generalization, e.g., continuous transportation of past data, kernel-smoothed time-sensitive parameters, and more…
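The abstract is truncated before the method itself, but the title names a gradient interpolation loss for generalizing along time. Below is a minimal, hypothetical sketch of what such a loss can look like, assuming a model that takes the timestamp t as an extra input feature: the prediction at time t is extrapolated to a nearby future time t + delta by a first-order Taylor step using the autograd derivative of the output with respect to t, and that extrapolated prediction is supervised as well. All names here (TimeConditionedRegressor, gradient_interpolation_loss, delta_max) are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TimeConditionedRegressor(nn.Module):
    """Toy model that receives the timestamp t as an extra input feature."""
    def __init__(self, in_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim + 1, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # x: [batch, in_dim], t: [batch, 1] -> prediction: [batch]
        return self.net(torch.cat([x, t], dim=-1)).squeeze(-1)

def gradient_interpolation_loss(model, x, t, y, delta_max=0.5):
    """Sketch of a gradient-interpolation-style loss along time:
    also supervise a first-order Taylor extrapolation of the prediction
    to a nearby future time t + delta (delta_max is a hypothetical knob)."""
    t = t.detach().clone().requires_grad_(True)
    pred = model(x, t)  # prediction at the observed time t

    # d pred_i / d t_i for each example; create_graph=True keeps this
    # derivative in the graph so the penalty also trains the model.
    dpred_dt = torch.autograd.grad(
        pred.sum(), t, create_graph=True
    )[0].squeeze(-1)

    delta = delta_max * torch.rand_like(pred)  # random small future offset
    pred_future = pred + delta * dpred_dt      # Taylor step along time

    return F.mse_loss(pred, y) + F.mse_loss(pred_future, y)

# Usage on random data:
model = TimeConditionedRegressor(in_dim=8)
x, t, y = torch.randn(32, 8), torch.rand(32, 1), torch.randn(32)
loss = gradient_interpolation_loss(model, x, t, y)
loss.backward()
```

The design intuition, under these assumptions, is that forcing the time-derivative of the prediction to be useful for extrapolation regularizes how the model's output moves along the time axis, which is what "generalizing to data not too far into the future" requires.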

Cited by 0 publications
References 11 publications (22 reference statements)