Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2022
DOI: 10.18653/v1/2022.acl-long.117

Retrieval-guided Counterfactual Generation for QA

Cited by 8 publications (5 citation statements)
References 28 publications
“…Counterfactual Inference. Our work is based on counterfactual inference (Pearl et al., 2000), which has shown promising results in various NLP tasks, including question answering (Paranjape et al., 2022), machine translation, and story generation (Qin et al., 2019). In particular, prior work uses counterfactual inference for response generation, exploring potential responses via counterfactual off-policy training.…”
Section: Further Discussion
confidence: 99%
“…In NLP, early approaches (Kaushik et al., 2020) pair each instance with a label-flipped augmentation, obtained by having humans perturb labels through minimal lexical changes. Later approaches propose automatic synthesis to alleviate this human effort (Han et al., 2021; Paranjape et al., 2022; Calderon et al., 2022).…”
Section: A4 Counterfactual Text Augmentation
confidence: 99%
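To make the pairing scheme concrete, here is a minimal, self-contained sketch of label-flipped augmentation via a single lexical edit. The `Example` class, the `ANTONYMS` map, and the toy training data are illustrative assumptions, not drawn from any of the cited papers.

```python
# A minimal sketch (not from the cited papers) of counterfactual text
# augmentation: pair each instance with a label-flipped variant produced
# by one small lexical change, mimicking human-authored counterfactuals.
from dataclasses import dataclass

@dataclass
class Example:
    text: str
    label: str  # e.g. "positive" / "negative"

# Hypothetical single-word antonym map used to make a minimal edit.
ANTONYMS = {"great": "terrible", "love": "hate", "best": "worst"}
FLIP = {"positive": "negative", "negative": "positive"}

def counterfactual(ex: Example):
    """Return a label-flipped copy made with one lexical change, if possible."""
    for word, antonym in ANTONYMS.items():
        if word in ex.text:
            return Example(ex.text.replace(word, antonym, 1), FLIP[ex.label])
    return None  # no minimal edit available; a real system would back off

train = [Example("I love this movie, the best of the year.", "positive")]
augmented = [cf for ex in train if (cf := counterfactual(ex)) is not None]
print(augmented)
```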
“…However, Joshi and He (2022) find that a limited set of perturbation types further exacerbates biases, resulting in poor generalization to unseen perturbation types. Generally, creating an assorted set of instance-specific perturbations is challenging, often requiring external knowledge (Paranjape et al., 2022).

Retrieval Augmented Generation. Retrieving task-relevant knowledge from a large corpus of unstructured and unlabeled text has proven to be very effective for knowledge-intensive language generation tasks like question answering, machine translation (Gu et al., 2018), and dialogue generation (Weston et al., 2018).…”
Section: Related Work
confidence: 99%
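The retrieve-then-generate pattern described above can be sketched in a few lines. The corpus, the overlap-based `score` function, and the `generate` stub below are our own illustrative assumptions, standing in for a learned retriever and a seq2seq generator.

```python
# A minimal, self-contained sketch (our illustration, not any cited system)
# of retrieval-augmented generation: score corpus passages against the
# query by token overlap, then condition "generation" on the top passage.
from collections import Counter

CORPUS = [  # hypothetical unlabeled text corpus
    "Marie Curie won the Nobel Prize in Physics in 1903.",
    "The Amazon is the largest rainforest on Earth.",
]

def score(query: str, passage: str) -> int:
    q, p = Counter(query.lower().split()), Counter(passage.lower().split())
    return sum((q & p).values())  # bag-of-words overlap as a stand-in retriever

def retrieve(query: str, k: int = 1):
    return sorted(CORPUS, key=lambda passage: score(query, passage), reverse=True)[:k]

def generate(query: str) -> str:
    context = retrieve(query)[0]
    # A real system would condition a seq2seq model on (query, context).
    return f"(answering '{query}' using retrieved context: '{context}')"

print(generate("Who won the Nobel Prize in Physics?"))
```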
“…In a similar vein, CORE uses learned retrieval for counterfactual generation. While Paranjape et al. (2022) use off-the-shelf retrieval models to generate counterfactuals for QA, learning to retrieve counterfactuals is non-trivial for problems other than QA. CORE provides a recipe to train retrieval for general tasks.…”
Section: Related Work
confidence: 99%
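A rough sketch of the retrieve-then-edit idea behind retrieval-guided counterfactual generation for QA follows. The toy `CONTEXTS` corpus, the overlap heuristic, and the pairing logic are hypothetical stand-ins, not the method of Paranjape et al. (2022) or CORE.

```python
# A rough, hypothetical sketch of retrieve-then-edit counterfactual
# generation for QA: retrieve an alternative context close to the
# original, then pair the question with the answer it now supports.
CONTEXTS = {  # toy corpus of (context, answer) pairs; illustrative only
    "The Eiffel Tower is in Paris.": "Paris",
    "The Eiffel Tower replica stands in Las Vegas.": "Las Vegas",
}

def retrieve_alternative(context: str):
    """Pick the most lexically similar *different* context from the corpus."""
    def overlap(a: str, b: str) -> int:
        return len(set(a.lower().split()) & set(b.lower().split()))
    candidates = [(c, a) for c, a in CONTEXTS.items() if c != context]
    return max(candidates, key=lambda pair: overlap(pair[0], context))

question = "In which city is the Eiffel Tower?"
original = "The Eiffel Tower is in Paris."
new_context, new_answer = retrieve_alternative(original)
# The counterfactual instance keeps the question, swaps context and answer.
print((question, new_context, new_answer))
```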