Proceedings of the 55th Annual Meeting of the Association For Computational Linguistics (Volume 2: Short Papers) 2017
DOI: 10.18653/v1/p17-2001

Classifying Temporal Relations by Bidirectional LSTM over Dependency Paths

Abstract: Temporal relation classification is becoming an active research field. Many methods have been proposed, but most focus on extracting features from external resources, and less attention has been paid to a significant advance in a closely related task: relation extraction. In this work, we borrow a state-of-the-art method from relation extraction by adopting bidirectional long short-term memory (Bi-LSTM) along dependency paths (DP). We make a "common root" assumption to extend DP representations of cross…
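The abstract's core idea, encoding the dependency path between two events with a bidirectional LSTM and classifying the temporal relation from the concatenated final states, can be sketched in plain NumPy. This is a minimal illustration, not the authors' implementation: the dimensions, random parameters, and label count are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; gates stacked as [input, forget, output, candidate]."""
    z = W @ x + U @ h + b
    H = h.size
    i, f, o = (1 / (1 + np.exp(-z[k * H:(k + 1) * H])) for k in range(3))
    g = np.tanh(z[3 * H:])
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def run_lstm(seq, W, U, b, H):
    """Run one LSTM direction over a sequence; return the final hidden state."""
    h, c = np.zeros(H), np.zeros(H)
    for x in seq:
        h, c = lstm_step(x, h, c, W, U, b)
    return h

D, H, L = 8, 16, 5  # embedding dim, hidden dim, number of relation labels (all assumed)

# Word embeddings along the dependency path between the two events (toy data).
path = [rng.standard_normal(D) for _ in range(4)]

# Separate parameters for the forward and backward directions.
params = [(rng.standard_normal((4 * H, D)) * 0.1,
           rng.standard_normal((4 * H, H)) * 0.1,
           np.zeros(4 * H)) for _ in range(2)]

h_fwd = run_lstm(path, *params[0], H)        # left-to-right pass
h_bwd = run_lstm(path[::-1], *params[1], H)  # right-to-left pass

feat = np.concatenate([h_fwd, h_bwd])        # Bi-LSTM path representation
W_out = rng.standard_normal((L, 2 * H)) * 0.1
logits = W_out @ feat
probs = np.exp(logits - logits.max())
probs /= probs.sum()                         # softmax over temporal relation labels
print(feat.shape)  # → (32,)
```

With trained parameters, `probs.argmax()` would give the predicted temporal relation; here the weights are random, so only the shapes are meaningful.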

Cited by 93 publications (73 citation statements); references 11 publications.
“…annotated temporal relation corpora with all events and relations fully annotated is reported to be a challenging task, as annotators could easily overlook some facts (Bethard et al., 2007; Ning et al., 2017), which made both modeling and evaluation extremely difficult in previous event temporal relation research. The TB-Dense dataset mitigates this issue by forcing annotators to examine all pairs of events within the same or neighboring sentences, and it has been widely evaluated on this task (Ning et al., 2017; Cheng and Miyao, 2017; Meng and Rumshisky, 2018). Recent data construction efforts such as MATRES (Ning et al., 2018a) further enhance data quality by using a multi-axis annotation scheme and adopting the start-point of events to improve inter-annotator agreement.…”
Section: Temporal Relation Data
confidence: 99%
“…We improve upon the model architecture proposed by Cheng and Miyao (2017) for temporal relation extraction, which involves classifying the temporal relation between a given pair of events e1 and e2. Our proposed architecture is outlined in Figure 1.…”
Section: Methods
confidence: 99%
“…The parameters are shared between these lower biLSTMs for the two sentences. Prior work (Cheng and Miyao, 2017) does not include these lower biLSTMs and only leverages the dependency encoding, explained next.…”
Section: Methods
confidence: 99%
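The weight sharing described in this citation statement, one set of encoder parameters reused for both sentences, can be shown with a tiny stand-in encoder. A plain RNN is used here instead of a biLSTM purely for brevity; all names and dimensions are illustrative assumptions, not the cited model.

```python
import numpy as np

rng = np.random.default_rng(1)
D, H = 8, 16  # embedding / hidden sizes (assumed)

# A single set of recurrent weights shared by both sentence encoders,
# mirroring "the parameters are shared between these lower biLSTMs".
Wx = rng.standard_normal((H, D)) * 0.1
Wh = rng.standard_normal((H, H)) * 0.1

def encode(sentence):
    """Run the shared recurrent encoder over one sentence's embeddings."""
    h = np.zeros(H)
    for x in sentence:
        h = np.tanh(Wx @ x + Wh @ h)
    return h

sent1 = [rng.standard_normal(D) for _ in range(6)]  # sentence of event e1 (toy)
sent2 = [rng.standard_normal(D) for _ in range(4)]  # sentence of event e2 (toy)

h1, h2 = encode(sent1), encode(sent2)  # same Wx, Wh applied to both sentences
pair_feat = np.concatenate([h1, h2])   # passed on to the upper layers
print(pair_feat.shape)  # → (32,)
```

Sharing the weights halves the parameter count and lets both sentences be embedded in the same representation space, which is the usual motivation for this design choice.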
“…Since then, much research has focused on further improving pairwise classification models by exploring different types of classifiers and features, such as (among others) logistic regression and support vector machines (Bethard, 2013; Lin et al., 2015), and different types of neural network models, such as long short-term memory networks (LSTM) (Tourille et al., 2017; Cheng and Miyao, 2017) and convolutional neural networks (CNN) (Dligach et al., 2017). Moreover, different sieve-based approaches were proposed (Mirza and Tonelli, 2016), facilitating the mixing of rule-based and machine learning components.…”
Section: Temporal Information Extraction
confidence: 99%