Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020)
DOI: 10.18653/v1/2020.emnlp-main.436

Severing the Edge Between Before and After: Neural Architectures for Temporal Ordering of Events

Abstract: In this paper, we propose a neural architecture and a set of training methods for ordering events by predicting temporal relations. Our proposed models receive a pair of events within a span of text as input and they identify temporal relations (Before, After, Equal, Vague) between them. Given that a key challenge with this task is the scarcity of annotated data, our models rely on either pretrained representations (i.e. RoBERTa, BERT or ELMo), transfer and multi-task learning (by leveraging complementary data…
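As a concrete illustration of the pairwise setup the abstract describes, the sketch below classifies the temporal relation between two event mentions using a pretrained RoBERTa encoder. This is a minimal, hypothetical reading of the task, not the paper's exact architecture: the event-vector pooling, the linear classification head, and all names are our own assumptions.

```python
# Minimal sketch (assumptions, not the paper's architecture): encode the text
# span with RoBERTa, take the hidden states at the two event trigger tokens,
# and classify the pair into one of the four MATRES-style relations.
import torch
import torch.nn as nn
from transformers import RobertaModel, RobertaTokenizerFast

LABELS = ["Before", "After", "Equal", "Vague"]

class PairwiseTemporalClassifier(nn.Module):
    def __init__(self, encoder_name: str = "roberta-base"):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Concatenate the contextual vectors of the two event triggers
        # and map them to the four relation labels.
        self.classifier = nn.Linear(2 * hidden, len(LABELS))

    def forward(self, input_ids, attention_mask, e1_idx, e2_idx):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        h = out.last_hidden_state                 # (batch, seq_len, hidden)
        rows = torch.arange(h.size(0))
        e1 = h[rows, e1_idx]                      # hidden state at first event token
        e2 = h[rows, e2_idx]                      # hidden state at second event token
        return self.classifier(torch.cat([e1, e2], dim=-1))

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = PairwiseTemporalClassifier()

text = "The storm hit the coast before residents evacuated."
enc = tokenizer(text, return_tensors="pt")
# Event positions would come from upstream event extraction; here we locate the
# trigger words by character offset purely for illustration.
e1_tok = enc.char_to_token(text.index("hit"))
e2_tok = enc.char_to_token(text.index("evacuated"))

with torch.no_grad():
    logits = model(enc["input_ids"], enc["attention_mask"],
                   torch.tensor([e1_tok]), torch.tensor([e2_tok]))
print(LABELS[logits.argmax(-1).item()])  # untrained head: prediction is arbitrary
```

With an untrained classification head the printed label is meaningless; in practice the model would be fine-tuned on annotated event pairs (e.g. MATRES), possibly combined with the transfer and multi-task learning strategies the abstract mentions.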

Cited by 17 publications (17 citation statements)
References 20 publications

“…Table 3 compares our work to the baseline methods reported on the TDDMan, TDDAuto, MATRES, and TimeBank-Dense datasets. We also include results for BERT-based Transformer (Devlin et al, 2019) and RoBERTa (Liu et al, 2019) following Ballesteros et al (2020). To prevent truncation or memory errors otherwise caused by multi-sentence spans, we concatenate only sentences containing source and events as input to Transformer baselines.…”
Section: Results (mentioning)
confidence: 99%
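The quoted passage describes trimming long, multi-sentence spans down to just the sentences that contain the two events before feeding them to a Transformer baseline. A rough sketch of that preprocessing step is below; it assumes sentence segmentation and the events' sentence indices are already available, and the function name is ours.

```python
# Rough sketch (our assumptions): keep only the sentences containing the source
# and target events and concatenate them, so the encoder input stays short.
from typing import List

def build_pair_input(sentences: List[str], src_sent_idx: int, tgt_sent_idx: int,
                     sep: str = " ") -> str:
    """Concatenate only the sentences containing the two events, in document order."""
    keep = sorted({src_sent_idx, tgt_sent_idx})
    return sep.join(sentences[i] for i in keep)

doc = [
    "A magnitude 6.1 quake struck the region on Monday.",
    "Officials surveyed the damage the next morning.",
    "Rescue teams arrived two days later.",
]
# Source event in sentence 0 ("struck"), target event in sentence 2 ("arrived").
print(build_pair_input(doc, 0, 2))
```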
“…Prior work focuses on extracting temporal relations between event pairs (a.k.a. TLINKS) present in the same sentence (intra-sentence TLINKS) or adjacent sentences (inter-sentence TLINKS), mostly ignoring document-level pairs (cross-document TLINKS) (Reimers et al., 2016). Past work has used RNNs (Cheng and Miyao, 2017; Meng et al., 2017; Goyal and Durrett, 2019; Ning et al., 2019; Han et al., 2019a,b,c, 2020b) and Transformer networks (Ballesteros et al., 2020; Zhao et al., 2020b) to encode a few sentences or a short paragraph, but these models do not capture long-range dependencies or multi-hop reasoning at the document level. This shortcoming is evident in the TDDiscourse dataset (Naik et al., 2019), which was designed to highlight global discourse-level challenges, e.g., multi-hop chain reasoning, future or hypothetical events, and reasoning requiring world knowledge.…”
Section: Introduction (mentioning)
confidence: 99%
“…We note that the problem of temporal graph extraction is different from the more popular task of temporal relation extraction (Temprel), which deals with classifying the temporal link between two already extracted events. State-of-the-art Temprel systems use neural methods (Ballesteros et al., 2020; Ning et al., 2019b; Goyal and Durrett, 2019; Han et al., 2019; Cheng and Miyao, 2017), but typically use only a handful of documents for their development and evaluation. Vashishtha et al. (2019) are a notable exception, using Amazon Mechanical Turk to obtain manual annotations over a larger dataset of 16,000 sentences.…”
Section: Temporal Relation Extraction (mentioning)
confidence: 99%
“…The tasks are binary or multi-class classification problems. Note that the MATRES dataset is split at the article level, as in previous work [15]. MATRES [18] is a pairwise event temporal ordering prediction dataset, where each event pair in one document is annotated with a temporal relation (Before, After, Equal, Vague).…”
mentioning
confidence: 99%
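The article-level split mentioned in this statement means that all event pairs drawn from one document land in the same partition, so no article contributes pairs to both training and evaluation. A small illustrative sketch, with hypothetical field names and split ratio:

```python
# Sketch (our assumptions): split event pairs by document so the same article
# never appears in both train and test.
import random
from collections import defaultdict

def split_by_article(pairs, train_frac=0.8, seed=13):
    """pairs: iterable of dicts with at least a 'doc_id' field."""
    by_doc = defaultdict(list)
    for p in pairs:
        by_doc[p["doc_id"]].append(p)
    doc_ids = sorted(by_doc)
    random.Random(seed).shuffle(doc_ids)
    cut = int(len(doc_ids) * train_frac)
    train = [p for d in doc_ids[:cut] for p in by_doc[d]]
    test = [p for d in doc_ids[cut:] for p in by_doc[d]]
    return train, test

pairs = [
    {"doc_id": "doc1", "e1": "met", "e2": "signed", "label": "Before"},
    {"doc_id": "doc1", "e1": "signed", "e2": "announced", "label": "Before"},
    {"doc_id": "doc2", "e1": "left", "e2": "arrived", "label": "After"},
]
train, test = split_by_article(pairs, train_frac=0.5)
print(len(train), len(test))  # all of doc1's pairs stay together
```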