Findings of the Association for Computational Linguistics: EMNLP 2020
DOI: 10.18653/v1/2020.findings-emnlp.363

Temporal Reasoning in Natural Language Inference

Abstract: We introduce five new natural language inference (NLI) datasets focused on temporal reasoning. We recast four existing datasets annotated for event duration (how long an event lasts) and event ordering (how events are temporally arranged) into more than one million NLI examples. We use these datasets to investigate how well neural models trained on a popular NLI corpus capture these forms of temporal reasoning.
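To make the recasting concrete, the following is a minimal Python sketch of how a single event-duration annotation could be turned into labeled NLI premise-hypothesis pairs. The annotation schema, hypothesis templates, and label names here are illustrative assumptions for this sketch, not the paper's actual recasting procedure.

# Minimal sketch of recasting an event-duration annotation into NLI pairs.
# The duration scale, templates, and labels below are illustrative
# assumptions; they are not the exact templates used in the paper.

DURATION_UNITS = ["seconds", "minutes", "hours", "days", "weeks", "months", "years"]

def recast_duration(sentence: str, predicate: str, annotated_unit: str):
    """Turn one duration-annotated event into labeled NLI examples.

    An event annotated as lasting on the order of `annotated_unit`
    yields one 'entailed' hypothesis; hypotheses built from the other
    duration scales are labeled 'not-entailed'.
    """
    examples = []
    for unit in DURATION_UNITS:
        hypothesis = f"The {predicate} lasted for {unit}."
        label = "entailed" if unit == annotated_unit else "not-entailed"
        examples.append({"premise": sentence, "hypothesis": hypothesis, "label": label})
    return examples

# Example usage: one annotated sentence expands into seven labeled NLI pairs.
pairs = recast_duration("She boiled an egg.", "boiling", "minutes")
for pair in pairs[:3]:
    print(pair)

Applied across every annotated event in the source datasets, this kind of template expansion is how a modest number of annotations can yield over a million NLI examples.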

Citations: Cited by 21 publications (22 citation statements)
References: 50 publications
“…Multiple systems have been proposed as part of research into temporal ordering (Do et al., 2012; Moens and Leeuwenberg, 2017; Leeuwenberg and Moens, 2018; Meng and Rumshisky, 2018; Ning et al., 2018c; Han et al., 2019), duration prediction (Vashishtha et al., 2019) and other tasks. Our decision to use a textual entailment style follows recent work on natural language inference (Williams et al., 2017; Nie et al., 2020; Bhagavatula et al., 2020), which tends to not focus on time (for recent work on temporal NLI, see Vashishtha et al. (2020)). Many have used distant supervision for temporal reasoning (Gusev et al., 2011; Ning et al., 2018a; …).…”
Section: Related Work (mentioning)
Confidence: 99%
“…The dataset contains over 1.1K test instances. Each dialog contains 11.7 turns and 3 temporal expressions on average, presenting richer and more complex context compared to the recent single-sentence-based temporal question answering benchmarks (e.g., Zhou et al., 2019; Vashishtha et al., 2020). As above, each test instance contains two correct answers and two incorrect ones.…”
Section: Properties of TimeDial (mentioning)
Confidence: 99%
“…Although previous works have studied temporal reasoning in natural language, they have either focused on specific time-related concepts in isolation, such as temporal ordering and relation extraction (Leeuwenberg and Moens, 2018; Ning et al., 2018a), and/or dealt with limited context, such as single-sentence-based question answering (Zhou et al., 2019) and natural language inference (Vashishtha et al., 2020; Mostafazadeh et al., 2016).…”
Section: Introduction (mentioning)
Confidence: 99%
“…We follow recent work that tests for an expanded range of inference patterns in RTE systems (Bernardy and Chatzikyriakidis, 2019) by evaluating how well RTE models capture specific linguistic phenomena, such as pragmatic inferences (Jeretic et al., 2020), veridicality, and others (Pavlick and Callison-Burch, 2016; White et al., 2017; Dasgupta et al., 2018; Naik et al., 2018; Glockner et al., 2018; Kim et al., 2019; Kober et al., 2019; Richardson et al., 2020; Yanaka et al., 2020; Vashishtha et al., 2020; Poliak, 2020).…”
Section: Related Work (mentioning)
Confidence: 99%