Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.18653/v1/2020.emnlp-main.58

Back to the Future: Unsupervised Backprop-based Decoding for Counterfactual and Abductive Commonsense Reasoning

Abstract: Abductive and counterfactual reasoning, core abilities of everyday human cognition, require reasoning about what might have happened at time t, while conditioning on multiple contexts from the relative past and future. However, simultaneous incorporation of past and future contexts using generative language models (LMs) can be challenging, as they are trained either to condition only on the past context or to perform narrowly scoped text-infilling. In this paper, we propose DELOREAN, a new unsupervised decoding…
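The core idea described in the abstract, alternating forward generation with backward passes that propagate gradients from the future context into a candidate infill, can be sketched roughly as follows. This is an illustrative approximation rather than the authors' released implementation: the model choice (GPT-2 via Hugging Face), the soft-logit relaxation, the loss, and all hyperparameters are assumptions, and the full method also mixes forward-pass LM logits with the backward-updated ones and ranks the resulting candidates, steps this sketch omits.

```python
# Minimal sketch of backprop-based infilling between a past and a future context.
# All names, hyperparameters, and the loss formulation are illustrative assumptions.
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the infill logits are optimized


def backprop_infill(past_text, future_text, infill_len=10, steps=20, lr=0.1):
    """Optimize a soft infill between past and future contexts by backprop."""
    past_ids = tokenizer(past_text, return_tensors="pt").input_ids
    future_ids = tokenizer(future_text, return_tensors="pt").input_ids
    vocab_size = model.config.vocab_size
    embed = model.transformer.wte  # token embedding matrix

    # The infill is kept as a logit matrix so gradients can update it directly.
    infill_logits = torch.zeros(1, infill_len, vocab_size, requires_grad=True)
    optimizer = torch.optim.Adam([infill_logits], lr=lr)

    past_embeds = embed(past_ids)
    future_embeds = embed(future_ids)

    for _ in range(steps):
        # Relax the infill to a mixture of token embeddings (soft tokens).
        soft_infill = F.softmax(infill_logits, dim=-1) @ embed.weight

        # Score how well past + soft infill predicts the future context.
        inputs = torch.cat([past_embeds, soft_infill, future_embeds], dim=1)
        logits = model(inputs_embeds=inputs).logits
        n_future = future_ids.size(1)
        future_pred = logits[:, -n_future - 1:-1, :]
        loss = F.cross_entropy(future_pred.reshape(-1, vocab_size),
                               future_ids.reshape(-1))

        # Backward pass: push the infill logits toward the future context.
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Greedy readout of the optimized infill.
    return tokenizer.decode(infill_logits.argmax(dim=-1)[0])


print(backprop_infill("Ray drove to the store.", "He came home with a new bike."))
```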

Cited by 39 publications (63 citation statements); references 46 publications. Representative citation statements, ordered by relevance:

“…This work utilizes the technology from causality (Pearl, 2009), which is a powerful tool to study the cause-effect relationships between variables and results. Supported by this technology, recent studies recognize the spurious correlations in neural models (Feng et al., 2018; Gururangan et al., 2018), and several works introduce causal mechanisms into computer vision (Tang et al., 2020; Qi et al., 2020) and natural language processing (Zeng et al., 2020; Wu et al., 2020; Qin et al., 2020; Fu et al., 2020). As far as we know, we are the first work to analyze the instability in OpenRE from the perspective of causality.…”
Section: Related Work
confidence: 99%
“…Non-monotonic generation and refinement. Another relevant line of research is non-monotonic generation (Gu et al., 2019), infilling (Zhu et al., 2019; Shen et al., 2020; Qin et al., 2020), or refinement (Lee et al., 2018; Novak et al., 2016; Mansimov et al., 2019; Kasai et al., 2020), which differ from the restricted left-to-right generation in conventional LMs. Again, those approaches largely depend on specialized architectures and inference, making them difficult to integrate with the powerful pretrained LMs.…”
Section: Related Work
confidence: 99%
“…This naturally fits REFLECTIVE DECODING, which fills in contextual gaps. Recent work has directly addressed this task (Qin et al., 2020), while the infilling literature is also quite applicable (Donahue et al., 2020). We compare to both of these methods on abductive infilling, showing superior results.…”
Section: Novelty In Paraphrasing
confidence: 99%