2017
DOI: 10.1609/aaai.v31i1.11001

What Happens Next? Future Subevent Prediction Using Contextual Hierarchical LSTM

Abstract: Events are typically composed of a sequence of subevents. Predicting a future subevent of an event is of great importance for many real-world applications. Most previous work on event prediction relied on hand-crafted features and can only predict events that already exist in the training data. In this paper, we develop an end-to-end model which directly takes the texts describing previous subevents as input and automatically generates a short text describing a possible future subevent. Our model captures the …
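The abstract describes a hierarchical design: a word-level encoder summarizes the text of each past subevent, an event-level LSTM tracks the sequence of subevent vectors, and a decoder generates the words of a possible future subevent. Below is a minimal PyTorch sketch of that hierarchy; the class name `HierarchicalEventLSTM` and all dimensions are illustrative assumptions, not the paper's exact Contextual Hierarchical LSTM.

```python
import torch
import torch.nn as nn

class HierarchicalEventLSTM(nn.Module):
    """Two-level encoder (words -> subevents) plus a word-level decoder.
    A sketch of the hierarchy the abstract describes, not the paper's model."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Word-level LSTM: encodes the tokens of one subevent description.
        self.word_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Event-level LSTM: reads the sequence of subevent vectors.
        self.event_lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        # Decoder LSTM: generates the words of a possible future subevent.
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, subevents, target_prefix):
        # subevents: (batch, n_events, n_words) token ids of past subevents
        # target_prefix: (batch, n_target) shifted tokens of the next subevent
        b, n_ev, n_w = subevents.shape
        words = self.embed(subevents.view(b * n_ev, n_w))
        _, (h_word, _) = self.word_lstm(words)            # (1, b*n_ev, hidden)
        event_vecs = h_word.squeeze(0).view(b, n_ev, -1)  # one vector per subevent
        _, (h_ev, c_ev) = self.event_lstm(event_vecs)     # context of the event so far
        dec_out, _ = self.decoder(self.embed(target_prefix), (h_ev, c_ev))
        return self.out(dec_out)                          # (batch, n_target, vocab)

# Hypothetical usage: 2 events, each with 4 past subevents of 12 tokens.
model = HierarchicalEventLSTM(vocab_size=5000)
past = torch.randint(0, 5000, (2, 4, 12))
prefix = torch.randint(0, 5000, (2, 8))
logits = model(past, prefix)   # (2, 8, 5000) next-word scores
```

Conditioning the decoder on the event-level state is what makes the prediction generative: the model can describe future subevents that never appeared verbatim in the training data.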

Cited by 33 publications (6 citation statements) · References 22 publications
“…When these datasets are utilized for forecasting, they are usually represented using a time-series approach [22][23], with each data point having a timestamp. An alternative approach is script learning, in which a model is given a sequence of events together with a candidate "future" event and asked to predict the association between them [24][25]. These methods demand that questions and response options be translated into their format and that text data be converted into event triples, which lessens the expressiveness of natural text.…”
Section: Forecasting
confidence: 99%
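The excerpt contrasts timestamped time-series forecasting with script learning, where a model scores how well a candidate "future" event fits a given event sequence. As a hedged illustration of that scoring setup, the sketch below uses average cosine similarity as a deliberately simple stand-in for the learned compatibility functions such systems train; the function name `script_score` and the embedding dimensions are assumptions.

```python
import torch
import torch.nn.functional as F

def script_score(chain_embs, candidate_emb):
    """Rate how well a candidate 'future' event fits a chain of context
    events. Cosine similarity here is only a placeholder for a learned
    compatibility function."""
    chain = F.normalize(chain_embs, dim=-1)    # (n_events, dim)
    cand = F.normalize(candidate_emb, dim=-1)  # (dim,)
    return (chain @ cand).mean()

# Hypothetical usage: pick the best of several candidate next events.
context = torch.randn(5, 64)   # 5 context events, 64-dim embeddings
options = torch.randn(3, 64)   # 3 answer options
best = max(range(3), key=lambda i: script_score(context, options[i]))
```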
“…These three works all model narrative chains, that is, event sequences in which a single entity (the protagonist) participates in every event. Hu et al. (2017) also take an RNN approach, applying a new hierarchical LSTM model to predict events by generating descriptive word sequences.…”
Section: Related Work
confidence: 99%
“…As illustrated in Figure 1, due to the overlap of words, parameterized additive models (Granroth-Wilding and Clark 2016; Modi 2016) and RNN-based models (Pichotta and Mooney 2016; Hu et al. 2017) are limited in their transformations. Additive models combine the words in these phrases by passing the concatenation or addition of their word embeddings to a parameterized function (usually a feed-forward neural network) that maps the summed vector into event embedding space.…”
Section: Introduction
confidence: 99%
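As a concrete reading of the "parameterized additive model" this excerpt describes, the sketch below concatenates role embeddings and maps them through a feed-forward network into event-embedding space. The class name `AdditiveEventComposer` and the (subject, predicate, object) role split are illustrative assumptions, not any cited paper's exact architecture.

```python
import torch
import torch.nn as nn

class AdditiveEventComposer(nn.Module):
    """Additive event composition: concatenate the word embeddings of the
    event roles and map them through a feed-forward network into a shared
    event-embedding space. A sketch under assumed names and dimensions."""

    def __init__(self, vocab_size, embed_dim=100, event_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.ffn = nn.Sequential(
            nn.Linear(3 * embed_dim, event_dim),  # concatenated role embeddings
            nn.Tanh(),
        )

    def forward(self, subj, pred, obj):
        # Each argument: (batch,) token ids for one event role.
        x = torch.cat([self.embed(subj), self.embed(pred), self.embed(obj)], dim=-1)
        return self.ffn(x)  # (batch, event_dim) event embedding
```

This makes the limitation the excerpt points to visible: because the composition is a fixed function of (near-)summed word embeddings, two events that share most of their words receive nearly identical event embeddings even when their meanings diverge.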
“…1, these works can be categorized into three major groups, i.e., intraevent-based, interevent-based, and scenario-based models. Specifically, built on some popular neural networks, intraevent-based approaches are designed to model multiplicative interactions among intraevent elements, e.g., skip-gram networks [16,23] and tensor networks [46]. Interevent-based approaches explore a range of complex and diverse interevent relations between events to refine the generated representations, e.g., time-order-based [45], event-graph-based [26], event-segment-based [30] and discourse-relations-based [24] relations.…”
Section: Introduction
confidence: 99%
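The "tensor networks" this excerpt groups under intraevent-based models capture multiplicative interactions between event elements, in contrast to the additive composition sketched above. A minimal sketch using PyTorch's bilinear layer follows; the class name `TensorEventComposer` and the dimensions are assumptions, not the cited architecture.

```python
import torch
import torch.nn as nn

class TensorEventComposer(nn.Module):
    """Multiplicative event composition: predicate and argument embeddings
    interact through a bilinear form (one weight slice per output dimension),
    in the spirit of neural tensor networks. Illustrative sketch only."""

    def __init__(self, embed_dim=100, event_dim=64):
        super().__init__()
        self.bilinear = nn.Bilinear(embed_dim, embed_dim, event_dim)

    def forward(self, pred_vec, arg_vec):
        # pred_vec, arg_vec: (batch, embed_dim) precomputed word embeddings
        return torch.tanh(self.bilinear(pred_vec, arg_vec))  # (batch, event_dim)
```

Because every output dimension mixes all pairwise products of the two inputs, small differences in one argument can reshape the whole event embedding, which is exactly the multiplicative sensitivity that additive models lack.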