Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-Level Semantics 2017
DOI: 10.18653/v1/w17-0906

LSDSem 2017 Shared Task: The Story Cloze Test

Abstract: The LSDSem'17 shared task is the Story Cloze Test, a new evaluation for story understanding and script learning. This test provides a system with a four-sentence story and two possible endings, and the system must choose the correct ending to the story. Successful narrative understanding (getting closer to human performance of 100%) requires systems to link various levels of semantics to commonsense knowledge. A total of eight systems participated in the shared task, with a variety of approaches including end-…

Cited by 124 publications (129 citation statements)
References 10 publications (20 reference statements)
“…We also notice the recent progress in RocStories (Mostafazadeh et al, 2017). Rather than inferring a possible ending generated from the document, recent systems solve this task by discriminatively comparing two candidates.…”
Section: Related Work
confidence: 99%
“…Based on these vector representations, it predicts the ending-option with the largest cosine similarity with the context. Msap: The task addressed in this paper was also a shared task for an EACL'17 workshop, and this baseline (Schwartz et al, 2017) represents the best performance reported on its leaderboard (Mostafazadeh et al, 2017). It trains a logistic regression based on stylistic and language-model-based features.…”
Section: Baselines
confidence: 99%
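The cosine-similarity baseline quoted above can be sketched briefly. This is an illustrative sketch only: the function and variable names are assumptions, and the actual shared-task systems compute their vector representations very differently (the excerpt does not say how the embeddings are produced).

```python
import numpy as np

def choose_ending(context_vec, ending_vecs):
    """Pick the index of the candidate ending whose vector has the
    largest cosine similarity with the story-context vector.
    (Hypothetical helper; not from the cited systems.)"""
    def cosine(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    sims = [cosine(context_vec, e) for e in ending_vecs]
    return int(np.argmax(sims))

# Toy example with made-up 3-d embeddings:
context = np.array([1.0, 0.0, 1.0])
endings = [np.array([0.9, 0.1, 1.1]),   # points in roughly the same direction
           np.array([-1.0, 0.5, 0.0])]  # points away from the context
print(choose_ending(context, endings))  # -> 0
```

The same selection rule underlies the story cloze evaluation itself: given two candidate fifth sentences, the system returns whichever one its scoring function prefers.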
“…To fine-tune the model on short stories, we re-train FES-LM on the ROCStories dataset (Mostafazadeh et al, 2017) with the model trained on NYT as initialization. We use the train set of ROCStories, which contains around 100K short stories (each consists of five sentences).…”
Section: Dataset and Preprocessing
confidence: 99%
“…We then re-train the model on short commonsense stories (with the model trained on news as initialization). We perform the story cloze test (Mostafazadeh et al, 2017), i.e. given a four-sentence story, choose the fifth sentence from two provided options.…”
Section: Introduction
confidence: 99%