2022
DOI: 10.1609/aaai.v36i10.21296
C2L: Causally Contrastive Learning for Robust Text Classification

Abstract: Despite the super-human accuracy of recent deep models on NLP tasks, their robustness is reportedly limited due to their reliance on spurious patterns. We thus aim to leverage contrastive learning and counterfactual augmentation for robustness. For augmentation, existing work either requires humans to add counterfactuals to the dataset or machines to automatically match near-counterfactuals already in the dataset. Unlike existing augmentation, which is affected by spurious correlations, ours, by synthesizing “a set…
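The abstract's core idea is to judge whether a term is causally important by synthesizing a *set* of counterfactuals for it and making a collective decision over the model's predictions on that set, rather than trusting any single substitution. The following is a minimal sketch of that intuition, not the paper's actual method: `collective_causality_score`, `toy_predict`, and the replacement list are all hypothetical names and simplifications introduced here for illustration.

```python
import numpy as np

def collective_causality_score(predict, tokens, term_idx, replacements):
    """Score how causal the token at term_idx appears to be:
    replace it with each substitute in `replacements` and average the
    resulting shift in the predicted class distribution (a hypothetical
    simplification of the collective-decision idea in the abstract)."""
    base = predict(tokens)
    shifts = []
    for rep in replacements:
        counterfactual = tokens.copy()
        counterfactual[term_idx] = rep
        # total-variation distance between the two class distributions
        shifts.append(np.abs(predict(counterfactual) - base).sum() / 2)
    # collective decision: aggregate over the whole counterfactual set
    return float(np.mean(shifts))

# toy stand-in for a sentiment classifier: positive iff "good" appears
def toy_predict(tokens):
    p_pos = 0.9 if "good" in tokens else 0.1
    return np.array([1 - p_pos, p_pos])

# "good" is causal for the toy model, so every substitution flips the
# prediction and the averaged shift is large
score = collective_causality_score(
    toy_predict, ["a", "good", "movie"], 1, ["bad", "dull", "fine"]
)
```

Averaging over the set makes the score robust to any single unlucky substitution, which is the property the citing papers below highlight.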

Cited by 19 publications (8 citation statements); references 22 publications.
“…Some work also introduces counterfactual and causal ideas. Among the existing counterfactual contrastive learning methods, C2L (Choi et al., 2022) is similar to ours. It synthesizes "a set" of counterfactuals and makes a collective decision on the distribution of predictions on this set, which can robustly supervise the causality of each term.…”
Section: Debiasing Strategy
confidence: 83%
“…Therefore, the human-in-the-loop process is designed to take advantage of human knowledge to modify text and obtain opposite labels for counterfactual augmentation [15]. But due to the high cost of human labor, many methods of automatic counterfactual augmentation have also been developed [14], [16], [17]. Edits against auto-mined causal features are used to obtain counterfactual samples.…”
Section: Single Word Counterfactual
confidence: 99%
“…[16] learns the rules by logical reasoning and gives faithful counterfactual predictions. C2L makes a collective decision based on a set of counterfactuals to overcome shortcut learning [17]. AutoCAD guides controllable generative models to automatically generate counterfactual data [58].…”
Section: Shortcut Mitigation and Robust Model Learning
confidence: 99%
“…This is because auto-tagging has a potentially very large label space, ranging from subject topics to knowledge components (KC) (Zhang et al., 2015; Koedinger et al., 2012; Mohania et al., 2021; Viswanathan et al., 2022). The resulting data scarcity decreases performance on rare labels during training (Chalkidis et al., 2020; Lu et al., 2020; Snell et al., 2017; Choi et al., 2022).…”
Section: Introduction
confidence: 99%