Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1433
Embedding Time Expressions for Deep Temporal Ordering Models

Abstract: Data-driven models have demonstrated state-of-the-art performance in inferring the temporal ordering of events in text. However, these models often overlook explicit temporal signals, such as dates and time windows. Rule-based methods can be used to identify the temporal links between these time expressions (timexes), but they fail to capture timexes' interactions with events and are hard to integrate with the distributed representations of neural net models. In this paper, we introduce a framework to infuse te…

Cited by 19 publications (22 citation statements). References 18 publications (26 reference statements).
“…The MATRES dataset is our primary dataset for training and validation. As in previous work, we use TimeBank and AQUAINT (256 articles) for training, 25 articles of which are selected at random for validation, and Platinum (20 articles) as a held-out test set (Ning et al., 2018; Goyal and Durrett, 2019; Ning et al., 2019). Articles from TimeBank and AQUAINT at full length are about 400 tokens long on average.…”
Section: Experiments and Results
confidence: 99%
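The split described in this excerpt can be reproduced mechanically; the following is a minimal sketch of such a split, where the document identifiers and random seed are hypothetical placeholders rather than details from the cited work:

```python
import random

# Illustrative reconstruction of the quoted setup: TimeBank + AQUAINT
# (256 articles) for training, 25 articles held out at random for
# validation, and Platinum (20 articles) as the held-out test set.
# Document identifiers below are hypothetical placeholders.
timebank_aquaint = [f"tb_aq_{i:03d}.tml" for i in range(256)]
platinum = [f"platinum_{i:02d}.tml" for i in range(20)]

random.seed(42)  # assumed seed, only so the example is reproducible
random.shuffle(timebank_aquaint)

val_docs = timebank_aquaint[:25]    # 25 randomly selected validation articles
train_docs = timebank_aquaint[25:]  # remaining 231 training articles
test_docs = platinum                # 20 Platinum test articles
```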
“…The sigmoid and exponent schedulers perform better than the constant scheduler, suggesting that the model needs to first learn about temporality, and then learn to be more specialized in predicting temporal ordering relations later. We believe this timex multi-tasking setup to be an implicit yet effective way to teach our model about timexes in general without the timex embeddings used in (Goyal and Durrett, 2019). When we use the ACE relation extraction dataset as an auxiliary task, none of the schedulers produce improvements, while the sigmoid and exponent schedulers fare significantly worse.…”
Section: Experiments and Results
confidence: 99%
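To make the scheduler comparison in this excerpt concrete, here is a minimal sketch of constant, sigmoid, and exponential weight schedules for an auxiliary (timex) loss; the exact functional forms and hyperparameters are assumptions for illustration, not taken from the cited paper:

```python
import math

def constant_weight(step, total_steps, w=0.5):
    # Auxiliary-task loss weight stays fixed throughout training.
    return w

def sigmoid_weight(step, total_steps, k=10.0):
    # Weight decays smoothly from ~1 toward ~0 as training progresses,
    # so the model first learns general temporality, then specializes
    # on the main temporal-ordering objective.
    x = step / total_steps
    return 1.0 / (1.0 + math.exp(k * (x - 0.5)))

def exponent_weight(step, total_steps, gamma=5.0):
    # Exponential decay of the auxiliary-task weight.
    return math.exp(-gamma * step / total_steps)

# Example combined loss at a given training step (losses are placeholders):
# loss = ordering_loss + sigmoid_weight(step, total_steps) * timex_aux_loss
```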