Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining 2020
DOI: 10.1145/3394486.3403269
Stable Learning via Differentiated Variable Decorrelation

Cited by 34 publications (23 citation statements)
References 19 publications
“…However, these methods are built on simple regressions or regular neural networks such as CNNs, whereas GNNs have more complex architectures and properties that need to be considered. We also note that [6] proposes a differentiated variable decorrelation term for linear regression. However, this decorrelation term requires multiple environments, with different correlations between stable and unstable variables, to be available at the training stage, whereas our method does not require this prior knowledge.…”
Section: Related Work (mentioning)
confidence: 93%
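The decorrelation term discussed in this statement is typically realized as a penalty on a weighted feature covariance matrix, with one learnable weight per training sample. Below is a minimal PyTorch sketch of the basic (undifferentiated) reweighting idea, assuming a plain design matrix X; all names are illustrative, and the differentiated variant from [6] would additionally use multiple training environments to penalize feature pairs unequally, which this sketch omits.

```python
import torch

def weighted_decorrelation_penalty(X, w):
    """Sum of squared off-diagonal entries of the weighted feature
    covariance matrix; zero when the reweighted samples make the
    features pairwise uncorrelated."""
    p = torch.softmax(w, dim=0)            # positive, normalized sample weights
    mu = p @ X                             # weighted feature means, shape [d]
    Xc = X - mu                            # center the features
    cov = Xc.T @ (Xc * p[:, None])         # weighted covariance, shape [d, d]
    off = cov - torch.diag(torch.diag(cov))
    return (off ** 2).sum()

# learn one weight logit per training sample
X = torch.randn(200, 5)                    # toy design matrix
w = torch.zeros(200, requires_grad=True)   # sample-weight logits
opt = torch.optim.Adam([w], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    weighted_decorrelation_penalty(X, w).backward()
    opt.step()
```

The learned weights can then be plugged into a weighted least-squares fit; the softmax parameterization is just one simple way to keep the weights positive and summing to one.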
“…Based on a causal-view analysis of the decorrelation regularizer, we theoretically prove that the weights of variables can be differentiated by their regression coefficients. Compared with existing decorrelation methods [5], [6], the proposed regularizer removes spurious correlations while maintaining a higher effective sample size and requiring less prior knowledge. Moreover, to better combine the decorrelation regularizer with existing GNN architectures, our theoretical results show that adding the regularizer to the embeddings learned by the penultimate layer is both theoretically sound and flexible.…”
Section: Introduction (mentioning)
confidence: 99%
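Since this statement pins down where the regularizer attaches (the penultimate-layer embeddings rather than the raw inputs), here is a hedged PyTorch sketch of one plausible wiring, reusing weighted_decorrelation_penalty from the sketch above. The alternating update schedule and all names (ToyNet, the weight logits w) are illustrative assumptions, not the cited papers' exact procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyNet(nn.Module):
    """Toy classifier that exposes its penultimate-layer embeddings."""
    def __init__(self, d_in, d_hidden, n_classes):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.head = nn.Linear(d_hidden, n_classes)

    def forward(self, x):
        h = self.body(x)                   # penultimate-layer embeddings
        return self.head(h), h

x, y = torch.randn(128, 10), torch.randint(0, 3, (128,))
model = ToyNet(10, 16, 3)
w = torch.zeros(128, requires_grad=True)   # per-sample weight logits
opt_model = torch.optim.Adam(model.parameters(), lr=1e-2)
opt_w = torch.optim.Adam([w], lr=5e-2)

for _ in range(100):
    logits, h = model(x)
    # step 1: update sample weights to decorrelate the embeddings
    opt_w.zero_grad()
    weighted_decorrelation_penalty(h.detach(), w).backward()
    opt_w.step()
    # step 2: update the model on the reweighted task loss
    opt_model.zero_grad()
    p = torch.softmax(w, dim=0).detach()
    task = (p * F.cross_entropy(logits, y, reduction="none")).sum()
    task.backward()
    opt_model.step()
```

Penalizing the embeddings rather than the raw node features keeps the penalty architecture-agnostic, which is presumably what makes this approach easy to bolt onto an existing GNN.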
“…Causal Inference: Causal inference (Pearl et al., 2016; Rubin, 2019) has been applied in many areas, including visual tasks (Tang et al., 2020b; Abbasnejad et al., 2020; Niu et al., 2021; Zhang et al., 2020a; Yue et al., 2020; Nan et al., 2021b), model robustness and stable learning (Srivastava et al., 2020; Zhang et al., 2020a; Shen et al., 2020; Yu et al., 2020; Dong et al., 2020), generation, language understanding (Feng et al., 2021b), and recommendation systems (Jesson et al., 2020; Zhang et al., 2021d; Wang et al., 2021b; Ding et al., 2021). The works most related to ours are (Zeng et al., 2020) and (Wang and Culotta, 2021), which generate counterfactuals for weakly supervised NER and text classification, respectively.…”
Section: Related Work (mentioning)
confidence: 99%