Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.18653/v1/2020.emnlp-main.370

GLUCOSE: GeneraLized and COntextualized Story Explanations



Citation types: 0 supporting, 61 mentioning, 0 contrasting
Cited by 62 publications (72 citation statements)
References 33 publications (31 reference statements)
“…To address the problem of insufficient annotated examples, we employ a large number of external causal statements (Sap et al., 2018; Mostafazadeh et al., 2020) that can supply adequate evidence of context-specific causal patterns for understanding event causalities. For example, in Figure 1, the context-specific causal pattern supported by an external causal statement S2 is helpful for identifying the causality between the event "noticed" (E1) and the event "alerted" (E3) in S1, which is unseen when training only on labeled data.…”
Section: Billy Finds His Childhood Teddy Bear >Causes/Enables>
Citation type: mentioning, confidence: 99%
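A rough sketch of the augmentation idea quoted above: retrieve external causal statements (e.g., from ATOMIC or GLUCOSE) that resemble a candidate event pair, then supply the retrieved patterns to an event-causality classifier as extra evidence. The retrieval helper and encoder below are hypothetical placeholders chosen for illustration, not the cited paper's architecture.

    # Sketch: retrieve external causal statements similar to an in-context
    # event pair; the retrieved patterns can then be appended to the
    # classifier input as extra evidence. Illustrative only.
    from sentence_transformers import SentenceTransformer, util

    encoder = SentenceTransformer("all-MiniLM-L6-v2")

    def retrieve_causal_statements(query: str, pool: list[str], k: int = 3):
        """Return the k external statements most similar to the query."""
        scores = util.cos_sim(encoder.encode(query, convert_to_tensor=True),
                              encoder.encode(pool, convert_to_tensor=True))[0]
        top = scores.topk(min(k, len(pool))).indices.tolist()
        return [pool[i] for i in top]

    # Hypothetical pool of external causal statements:
    pool = [
        "Someone_A notices smoke >Causes/Enables> Someone_A alerts others",
        "Someone_A hears an alarm >Causes/Enables> Someone_A leaves",
    ]
    evidence = retrieve_causal_statements("noticed -> alerted", pool, k=1)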
“…Recently, large pretrained language models (LMs) such as GPT-2 have shown remarkable performance on various generation tasks. While these pretrained LMs learn probabilistic associations between words and sentences, they still have difficulty modeling causality (Mostafazadeh et al., 2020). Moreover, in narrative story generation, models need to be consistent with everyday commonsense norms.…”
Section: Implicit Inference Rules Effect
Citation type: mentioning, confidence: 99%
“…However, the knowledge generated by COMET is non-contextualized and can therefore be inconsistent. Recently, Mostafazadeh et al. (2020) proposed GLUCOSE, a new resource and dataset that offers semi-structured commonsense inference rules grounded in the sentences of specific stories. They show that fine-tuning a pre-trained LM on the GLUCOSE dataset helps the model better generate inferable commonsense explanations given a complete story.…”
Section: Related Work
Citation type: mentioning, confidence: 99%
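A minimal sketch of that fine-tuning setup, assuming a seq2seq model such as T5 (used by the GLUCOSE authors for their baselines) and an illustrative serialization of one training pair; the exact input/output format of the released data may differ.

    # Sketch: fine-tune a seq2seq LM on one GLUCOSE-style training pair.
    # The serialization below is illustrative, not the released format.
    import torch
    from transformers import T5ForConditionalGeneration, T5TokenizerFast

    tokenizer = T5TokenizerFast.from_pretrained("t5-base")
    model = T5ForConditionalGeneration.from_pretrained("t5-base")
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

    # Input: causal dimension plus the story, with the selected sentence
    # marked; target: specific statement ** generalized rule.
    src = "#1: Billy was cleaning the attic. *He found his teddy bear.* ..."
    tgt = ("Billy finds his teddy bear >Causes/Enables> Billy smiles ** "
           "Someone_A finds Something_A (that is sentimental) "
           ">Causes/Enables> Someone_A smiles")

    batch = tokenizer(src, return_tensors="pt", truncation=True)
    labels = tokenizer(tgt, return_tensors="pt", truncation=True).input_ids

    model.train()
    optimizer.zero_grad()
    loss = model(**batch, labels=labels).loss  # seq2seq cross-entropy
    loss.backward()
    optimizer.step()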
“…ATOMIC is person-centric, so it cannot be applied to sentences describing general events. Mostafazadeh et al. (2020) construct GLUCOSE (GeneraLized and COntextualized Story Explanations), a large-scale dataset of implicit commonsense causal knowledge whose sentences can describe any event or state. Each GLUCOSE entry is organized as a story-specific causal statement paired with an inference rule generalized from that statement.…”
Section: Benchmarks
Citation type: mentioning, confidence: 99%
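As a rough illustration of that organization, one GLUCOSE entry can be modeled as a story-grounded specific statement paired with its generalized rule, indexed by one of the dataset's ten causal dimensions. The field names and example values below are hypothetical, not the released schema.

    # Illustrative model of a single GLUCOSE entry; field names are
    # hypothetical, not the dataset's released column names.
    from dataclasses import dataclass

    @dataclass
    class GlucoseEntry:
        story: str               # short story providing the context
        selected_sentence: str   # sentence the explanation is anchored to
        dimension: int           # one of the ten GLUCOSE causal dimensions
        specific_statement: str  # grounded causal statement
        general_rule: str        # inference rule generalized from it

    entry = GlucoseEntry(
        story="Billy was cleaning the attic. He found his teddy bear. ...",
        selected_sentence="He found his childhood teddy bear.",
        dimension=1,
        specific_statement=("Billy finds his childhood teddy bear "
                            ">Causes/Enables> Billy feels nostalgic"),
        general_rule=("Someone_A finds Something_A (that is sentimental) "
                      ">Causes/Enables> Someone_A feels nostalgic"),
    )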