Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2018
DOI: 10.18653/v1/p18-1050
Temporal Event Knowledge Acquisition via Identifying Narratives

Abstract: Inspired by the double temporality characteristic of narrative texts, we propose a novel approach for acquiring rich temporal "before/after" event knowledge across sentences in narrative stories. The double temporality states that a narrative story often describes a sequence of events following the chronological order and therefore, the temporal order of events matches with their textual order. We explored narratology principles and built a weakly supervised approach that identifies 287k narrative paragraphs f…
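The double-temporality assumption is simple enough to sketch in code. The snippet below is a minimal illustration, not the paper's implementation: once a paragraph is identified as a narrative and its events are listed in textual order, before/after pairs can be read off directly. The hand-supplied event list stands in for an event extractor.

```python
from itertools import combinations

def before_after_pairs(events):
    """Return (before, after) pairs from events listed in textual order,
    under the double-temporality assumption that textual order matches
    chronological order in a narrative paragraph (illustrative sketch)."""
    # combinations() preserves input order, so each pair keeps the
    # earlier-mentioned event first.
    return list(combinations(events, 2))

# Toy usage with events as they appear in a short narrative:
pairs = before_after_pairs(["wake up", "eat breakfast", "leave home"])
# [('wake up', 'eat breakfast'), ('wake up', 'leave home'),
#  ('eat breakfast', 'leave home')]
```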

Cited by 17 publications (13 citation statements). References 26 publications (37 reference statements). Citing publications: 2019-2024.
“…One might argue that the reason the sequence perturbations work better in terms of INC accuracy is that the events extracted from news do not necessarily follow the temporal order and therefore the perturbations will not create an issue. To show the effectiveness of our approach, we evaluated the performance of our models on the event sequences extracted from narratives coming from different domains: novels, blogs and news (Yao and Huang, 2018).…”
Section: Discussion
confidence: 99%
“…By considering the next event based on shuffled sequences of events, we encourage the model to treat the input more as a set of events rather than strictly as a discourse sequence. Surprisingly, despite our disruption of discourse order, experiments show how perturbations can improve event language modeling of text, particularly when evaluating the model on other domains which present events in different orders (e.g., novels or blogs present data in more of a "narrative" fashion than news datasets common in NLP (Yao and Huang, 2018)). Our experiments evaluate accuracy on the Inverse Narrative Cloze task on in-domain newswire, as well as out-domain novels and blogs.…”
Section: Introduction
confidence: 93%
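The perturbation idea quoted above amounts to shuffling the event context during training so the model cannot rely on discourse order. A minimal sketch, assuming events are already extracted as strings; the helper name and training setup are illustrative, not the cited work's implementation:

```python
import random

def perturb_event_sequence(events, rng=random):
    """Return a shuffled copy of an event sequence, so a model trained
    on it treats the context as a set of events rather than a fixed
    discourse order (a sketch of the quoted idea, not the authors' code)."""
    shuffled = list(events)
    rng.shuffle(shuffled)
    return shuffled

# Hypothetical training-time usage: predict the held-out next event
# from a perturbed context, as in an inverse-narrative-cloze-style setup.
context, next_event = ["board plane", "take off", "land"], "deplane"
perturbed = perturb_event_sequence(context)
```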
“…For event knowledge, we considered three major event relation types including temporal, causal and subevent. We obtained event temporal knowledge from a previous work (Yao and Huang, 2018) and we retrieved the latter two types of event knowledge from ConceptNet (Speer and Havasi, 2012), which is a widely used commonsense knowledge base.…”
Section: Dataset and Preprocessing
confidence: 99%
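For the two ConceptNet-based relation types mentioned above, causal and subevent edges can be pulled from ConceptNet's public HTTP API (api.conceptnet.io). The sketch below shows one plausible way to do that; it is not the cited work's exact retrieval pipeline:

```python
import requests

def conceptnet_edges(concept, relation, limit=20):
    """Query the public ConceptNet API for edges of one relation type,
    e.g. relation='/r/Causes' (causal) or '/r/HasSubevent' (subevent),
    and return (start, end) surface labels."""
    resp = requests.get(
        "http://api.conceptnet.io/query",
        params={"start": f"/c/en/{concept}", "rel": relation, "limit": limit},
        timeout=10,
    )
    # Each edge carries 'start' and 'end' nodes with human-readable labels.
    return [(e["start"]["label"], e["end"]["label"])
            for e in resp.json()["edges"]]

causal = conceptnet_edges("exercise", "/r/Causes")
subevents = conceptnet_edges("eat_in_restaurant", "/r/HasSubevent")
```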
“…Gathering large-scale high-quality labeled data with temporal annotations is often expensive and requires specially designed annotation schemes (Pustejovsky et al., 2003a; Cassidy et al., 2014; Ning et al., 2018b; Zhao et al., 2021). Here, we instead turn to a narrative documents corpus, EventsNarratives (Yao and Huang, 2018), and design an automatic method to extract the training data we need. In these documents, discourse order is loosely assumed to reflect temporal order, so events extracted from this text can directly provide training data for our models.…”
Section: Introduction
confidence: 99%
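Under the same loose discourse-order-as-temporal-order assumption, event chains extracted from such documents can be turned into labeled pairs for a temporal relation classifier. A hypothetical sketch; the labeling scheme here is illustrative, not the authors':

```python
import random

def narrative_training_pairs(event_chains, rng=random):
    """Convert event chains (in discourse order) from narrative documents
    into labeled before/after training pairs, assuming discourse order
    loosely reflects temporal order. Reversed pairs supply the 'after'
    direction so the classifier sees both labels (illustrative sketch)."""
    examples = []
    for chain in event_chains:
        for e1, e2 in zip(chain, chain[1:]):
            if rng.random() < 0.5:
                examples.append(((e1, e2), "before"))
            else:
                examples.append(((e2, e1), "after"))
    return examples

# Toy chain in discourse order:
data = narrative_training_pairs([["arrive", "order food", "pay", "leave"]])
```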