Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 2021
DOI: 10.1145/3459637.3482315
AdaRNN

Cited by 100 publications (42 citation statements)
References 42 publications
“…Arik et al. [2] propose Self-Adaptive Forecasting, which adapts the forecasting model with test-time training by "backcasting", using the backcast errors to signal a potential distribution shift and adjust the model's weights before inference. AdaRNN [7] splits the time-series history into dissimilar segments and learns importance weights to combine the RNN hidden states over these segments. Another notion of distribution shift, adversarial attacks, has recently been considered to build more robust forecasting [35, 21].…”
Section: Appendix A (mentioning)
confidence: 99%
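The segment-weighting idea attributed to AdaRNN in the statement above (split the history into segments, summarize each with an RNN, combine the summaries with learned importance weights) can be illustrated with a toy sketch. Everything here, including the plain tanh RNN, the softmax weighting, and all function and parameter names, is an illustrative assumption rather than the authors' implementation:

```python
import numpy as np

def simple_rnn_segment_forecast(segments, Wx, Wh, w_seg, w_out):
    """Toy sketch of segment importance weighting: run a plain tanh RNN
    over each segment, then combine the final hidden states with softmax
    importance weights. Illustrative only; AdaRNN itself is more involved.

    segments: list of arrays, each of shape (seg_len, input_dim)
    Wx: (hidden_dim, input_dim) input-to-hidden weights
    Wh: (hidden_dim, hidden_dim) hidden-to-hidden weights
    w_seg: per-segment importance logits, length len(segments)
    w_out: (hidden_dim,) readout vector producing a scalar forecast
    """
    summaries = []
    for seg in segments:
        h = np.zeros(Wh.shape[0])
        for x in seg:                     # simple tanh RNN recurrence
            h = np.tanh(Wx @ x + Wh @ h)
        summaries.append(h)               # final hidden state per segment
    logits = np.asarray(w_seg, dtype=float)
    w = np.exp(logits - logits.max())
    w /= w.sum()                          # softmax over segments
    combined = sum(wi * hi for wi, hi in zip(w, summaries))
    return w_out @ combined               # scalar forecast
```

In a trained model the importance logits would be learned jointly with the RNN; here they are plain inputs so the combination step stays visible.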
“…We use four widely-used evaluation metrics: IC [19], ICIR [4], Rank IC [18], and Rank ICIR [28]. At each date t, IC(t) could be measured by…”
Section: Evaluation Metrics (mentioning)
confidence: 99%
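The IC referenced above is commonly defined as the cross-sectional Pearson correlation between predicted scores and realized values at each date, with Rank IC as its Spearman (rank-correlation) counterpart and ICIR as mean(IC) / std(IC) over dates. A minimal sketch under those common definitions (the function names are ours, and the cited paper's exact formulas may differ):

```python
import numpy as np

def information_coefficient(predicted, actual):
    """IC at one date: Pearson correlation between predicted scores
    and realized values across the cross-section."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return np.corrcoef(predicted, actual)[0, 1]

def rank_ic(predicted, actual):
    """Rank IC: Pearson correlation of the ranks (Spearman correlation,
    assuming no ties)."""
    pr = np.argsort(np.argsort(predicted))   # rank transform
    ar = np.argsort(np.argsort(actual))
    return np.corrcoef(pr, ar)[0, 1]

def icir(ic_series):
    """ICIR: mean(IC) / std(IC) over dates; measures signal stability."""
    ic_series = np.asarray(ic_series, dtype=float)
    return ic_series.mean() / ic_series.std(ddof=1)
```

Rank ICIR follows the same mean-over-std construction applied to the Rank IC series.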
“…(1) JT (joint training): a naive baseline that trains on all the available data ever seen, D_{1,T}. (2) Adaptive RNN (AdaRNN) [5]: a two-stage method that follows a 'segment-adapt' principle for evolving, shifted data. (3) Meta-learning via online change point analysis (MOCA) [7]: an approach that augments a meta-learning algorithm with a differentiable Bayesian change point detection scheme.…”
Section: Experiments 3.1 Experimental Setup (mentioning)
confidence: 99%