2022
DOI: 10.1007/s11222-022-10097-z
Distributional anchor regression

Abstract: Prediction models often fail if train and test data do not stem from the same distribution. Out-of-distribution (OOD) generalization to unseen, perturbed test data is a desirable but difficult-to-achieve property for prediction models, and in general it requires strong assumptions on the data-generating process (DGP). In a causally inspired perspective on OOD generalization, the test data arise from a specific class of interventions on exogenous random variables of the DGP, called anchors. Anchor regression models…
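The distributional approach in this paper builds on linear anchor regression (Rothenhäusler et al., 2021), which penalizes the projection of the residuals onto the anchor variables. A minimal sketch, assuming a NumPy setting; the function name and data below are illustrative, not taken from the paper:

```python
import numpy as np

def anchor_regression(X, y, A, gamma=5.0):
    """Illustrative linear anchor regression (causal regularization).

    Minimizes ||(I - P_A)(y - X b)||^2 + gamma * ||P_A(y - X b)||^2,
    where P_A projects onto the column space of the anchors A.
    gamma = 1 recovers OLS; gamma -> infinity approaches IV-type estimation.
    """
    # Orthogonal projection onto the anchor space
    P = A @ np.linalg.pinv(A.T @ A) @ A.T
    # Data transformation: residuals along the anchor direction are
    # reweighted by sqrt(gamma), then plain least squares is applied.
    W = np.eye(len(y)) - P + np.sqrt(gamma) * P
    beta, *_ = np.linalg.lstsq(W @ X, W @ y, rcond=None)
    return beta
```

With `gamma = 1` the transformation is the identity, so the estimate coincides with ordinary least squares; larger `gamma` trades in-distribution fit for robustness against shift interventions acting through the anchors.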

Cited by 5 publications (4 citation statements) · References 38 publications
“…However, if it is not known, cross-validation is recommended, although no proof was provided regarding its adequacy for the proposed loss. Follow-up work extended the method to noisy instrumental variables [Oberst et al., 2021] and to discrete and censored outcomes [Kook et al., 2021].…”
Section: Introduction
confidence: 99%
“…The idea of causal regularization for enhancing stability and better external validity has been extended to a certain class of distributional regression models (Kook et al., 2022). The file “causal-regularization-supplement” in the repository (see end of Section 6) features the application of causal-regularized distributional regression to the OULAD data set to demonstrate improved worst-case prediction and better external validity.…”
Section: Distributional Regression and Causal Regularization
confidence: 99%
“…In some settings, such shifts may not represent plausible changes, as we demonstrate in Appendix D, where (in a simplified lab-testing example) the worst-case subpopulation is one where healthy patients are always tested, and sick patients never tested. Prior work on robustness to parametric interventions has been restricted to linear causal models with additive shift interventions [Rothenhäusler et al., 2021; Oberst et al., 2021; Kook et al., 2022]. Our work can be seen as extending those ideas to general non-linear causal models, where our focus is on evaluation rather than learning robust models.…”
Section: Introduction
confidence: 99%
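The additive shift interventions mentioned in this statement can be illustrated with a toy linear SCM (all variable names and coefficients below are invented for illustration): an OLS fit that exploits a hidden confounder degrades once the anchor is shifted at test time.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(shift=0.0, n=10_000):
    # Toy linear SCM with anchor A and hidden confounder H.
    # An additive shift intervention adds `shift` to A at test time.
    A = rng.normal(size=n) + shift
    H = rng.normal(size=n)
    X = 2.0 * A + H + rng.normal(size=n)
    y = 1.0 * X + H + rng.normal(size=n)
    return X, y

X_tr, y_tr = simulate(shift=0.0)
beta_ols = (X_tr @ y_tr) / (X_tr @ X_tr)   # univariate OLS slope
X_te, y_te = simulate(shift=3.0)           # perturbed test data
mse_train = np.mean((y_tr - beta_ols * X_tr) ** 2)
mse_test = np.mean((y_te - beta_ols * X_te) ** 2)
# The OLS slope absorbs part of the confounding through H, so its
# error increases under the shift intervention on the anchor.
```

Because the true causal coefficient here is 1.0 but OLS estimates roughly Cov(X, y)/Var(X) = 7/6, the fitted residuals carry a component proportional to X, which inflates the error once the anchor shift moves the mean of X.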