Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL) 2019
DOI: 10.18653/v1/k19-1062
Deep Structured Neural Network for Event Temporal Relation Extraction

Abstract: We propose a novel deep structured learning framework for event temporal relation extraction. The model consists of 1) a recurrent neural network (RNN) to learn scoring functions for pair-wise relations, and 2) a structured support vector machine (SSVM) to make joint predictions. The neural network automatically learns representations that account for long-term contexts to provide robust features for the structured model, while the SSVM incorporates domain knowledge such as transitive closure of temporal relations…
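The abstract describes two components: a neural scorer for each event pair, and joint inference that enforces domain constraints such as transitive closure. As a minimal sketch of the second component only, the toy below brute-forces the highest-scoring globally consistent assignment over hypothetical pairwise scores (the paper uses an SSVM with ILP-style inference and RNN-produced scores; the scores, event names, and two-label relation set here are illustrative assumptions, not the paper's setup):

```python
from itertools import product

RELATIONS = ("BEFORE", "AFTER")

def consistent(assignment):
    """Reject assignments that violate transitive closure:
    if A BEFORE B and B BEFORE C, then A BEFORE C (same for AFTER)."""
    events = sorted({e for pair in assignment for e in pair})
    for a, b, c in product(events, repeat=3):
        if len({a, b, c}) < 3:
            continue
        r_ab = assignment.get((a, b))
        r_bc = assignment.get((b, c))
        r_ac = assignment.get((a, c))
        if r_ab and r_ab == r_bc and r_ac and r_ac != r_ab:
            return False
    return True

def joint_predict(scores):
    """Pick the highest-scoring assignment that satisfies transitivity."""
    pairs = list(scores)
    best, best_score = None, float("-inf")
    for labels in product(RELATIONS, repeat=len(pairs)):
        assignment = dict(zip(pairs, labels))
        if not consistent(assignment):
            continue
        total = sum(scores[p][l] for p, l in assignment.items())
        if total > best_score:
            best, best_score = assignment, total
    return best

# Toy pairwise scores (in the paper these come from the RNN scorer):
scores = {
    ("e1", "e2"): {"BEFORE": 2.0, "AFTER": 0.1},
    ("e2", "e3"): {"BEFORE": 1.5, "AFTER": 0.2},
    # Locally AFTER scores higher here, but transitivity rules it out:
    ("e1", "e3"): {"BEFORE": 0.3, "AFTER": 0.4},
}

pred = joint_predict(scores)
```

Joint inference flips the locally preferred AFTER label on (e1, e3) to BEFORE, because any assignment with e1 BEFORE e2, e2 BEFORE e3, and e1 AFTER e3 is inconsistent and is pruned.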

Cited by 42 publications (24 citation statements). References 39 publications.
“…We note that the problem of temporal graph extraction is different from the more popular task of temporal relation extraction (Temprel), which deals with classifying the temporal link between two already extracted events. State-of-the-art Temprel systems use neural methods (Ballesteros et al., 2020; Ning et al., 2019b; Goyal and Durrett, 2019; Han et al., 2019; Cheng and Miyao, 2017), but typically use a handful of documents for their development and evaluation. Vashishtha et al. (2019) are a notable exception, using Amazon Mechanical Turk to obtain manual annotations over a larger dataset of 16,000 sentences.…”
Section: Temporal Relation Extraction
Confidence: 99%
“…Feature Encoder. Input instances are first sent to pre-trained language models such as BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019), then to a Bi-LSTM layer, as in previous event temporal relation extraction work (Han et al., 2019a). The encoded features are used as inputs to the event extractor and the relation module below.…”
Section: End-to-End Event Relation Extraction
Confidence: 99%
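The pipeline this citation describes is: pretrained-LM embeddings → a bidirectional recurrent layer → one shared feature vector per token, consumed by both the event extractor and the relation module. A rough dependency-free sketch of that shape is below; as simplifying assumptions, random vectors stand in for BERT/RoBERTa embeddings and a plain tanh RNN stands in for each direction of the Bi-LSTM, so this shows only the data flow, not the actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

def simple_rnn(xs, W, U, reverse=False):
    """Minimal tanh RNN over a list of vectors — a stand-in for one
    direction of the Bi-LSTM; returns one hidden state per input."""
    h = np.zeros(U.shape[0])
    out = []
    seq = xs[::-1] if reverse else xs
    for x in seq:
        h = np.tanh(W @ x + U @ h)
        out.append(h)
    return out[::-1] if reverse else out

# Toy "pretrained LM" output: 5 tokens, embedding dim 8
# (in the cited work these would come from BERT or RoBERTa).
tokens = list(rng.normal(size=(5, 8)))

d_in, d_h = 8, 6
Wf, Uf = rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h))
Wb, Ub = rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h))

fwd = simple_rnn(tokens, Wf, Uf)                 # left-to-right pass
bwd = simple_rnn(tokens, Wb, Ub, reverse=True)   # right-to-left pass

# Concatenate both directions: one shared feature vector per token,
# fed to both the event extractor and the relation module.
features = [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]
```

The key design point the citation highlights is feature sharing: both downstream modules read the same encoder output rather than each training its own encoder.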
“…Natural language supports various forms of temporal reasoning, including reasoning about the chronology and duration of events, and many Natural Language Understanding (NLU) tasks and models have been employed for understanding and capturing different aspects of temporal reasoning (UzZaman et al., 2013; Llorens et al., 2015; Mostafazadeh et al., 2016; Reimers et al., 2016; Tourille et al., 2017; Ning et al., 2017, 2018a,b; Meng and Rumshisky, 2018; Han et al., 2019; Naik et al., 2019; Vashishtha et al., 2019; Zhou et al., 2019, 2020). More broadly, the ability to perform temporal reasoning is important for understanding narratives (Nakhimovsky, 1987; Jung et al., 2011; Cheng et al., 2013), answering questions (Bruce, 1972; Khashabi, 2019), and summarizing events (Jung et al., 2011; Wang et al., 2018).…”
Section: Introduction
Confidence: 99%