Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021
DOI: 10.18653/v1/2021.naacl-main.67

Neural Language Modeling for Contextualized Temporal Graph Generation

Abstract: This paper presents the first study on using large-scale pre-trained language models for automated generation of an event-level temporal graph for a document. Despite the huge success of neural pre-training methods in NLP tasks, their potential for temporal reasoning over event graphs has not been sufficiently explored. Part of the reason is the difficulty in obtaining large training corpora with human-annotated events and temporal links. We address this challenge by using existing IE/NLP tools to automatically g…
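
The abstract frames the task as conditional text generation: given a document, a fine-tuned language model decodes a linearized temporal graph as text. Below is a minimal sketch of that framing, assuming a Hugging Face GPT-2 checkpoint; the "<GRAPH>" separator, the toy document, and the decoding settings are illustrative assumptions rather than the paper's released code, and in practice the model would first be fine-tuned on document-graph pairs.

```python
# Sketch only: condition a pre-trained LM on a document and beam-decode a
# linearized graph string. Assumes prior fine-tuning on (document, graph) pairs.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

document = "The quake struck at dawn. Rescue teams arrived hours later."
prompt = document + " <GRAPH> "  # hypothetical document/graph separator token

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=64,
    num_beams=4,
    pad_token_id=tokenizer.eos_token_id,
)
# Strip the prompt tokens and keep only the generated graph string.
graph_string = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:])
print(graph_string)
```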

Cited by 6 publications (12 citation statements)
References 33 publications

“…Representative works on graph generation from language models include knowledge graph completion models like COMET (Hwang et al., 2021) that fine-tune GPT (Radford et al., 2019; Brown et al., 2020) and BART (Lewis et al., 2020), generation of event influence graphs (Tandon et al., 2019; Madaan et al., 2020), partially ordered scripts (Sakaguchi et al., 2021), temporal graphs (Madaan and Yang, 2021), entailment trees, proof graphs (Saha et al., 2020; Saha et al., 2021a) and commonsense explanation graphs (Saha et al., 2021b). Linguistic tasks like syntactic parsing (Mohammadshahi and Henderson, 2021; Kondratyuk and Straka, 2019) and semantic parsing (Chen et al., 2020b; Shin et al., 2021) have also made use of language models.…”
Section: Related Work
confidence: 99%
“…We test the generalizability of constructing structurally and semantically perturbed graphs for contrastive learning by also experimenting on a temporal graph generation task (Madaan and Yang, 2021) that requires constructing a temporal graph from a document. The nodes in the graph are events from the document and the edges are temporal relations between events ("before", "after", etc.).…”
Section: Generalization to Other Graph Generation Tasks
confidence: 99%
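
As a concrete illustration of the graphs this excerpt describes, here is one simple way such a temporal graph could be represented in Python, with events from a document as nodes and temporal relations as labeled edges; the events themselves are invented for the example.

```python
# Illustrative only: a temporal graph with events as nodes and
# "before"/"after" temporal relations as labeled edges.
temporal_graph = {
    "nodes": ["quake struck", "teams arrived", "survivors rescued"],
    "edges": [
        ("quake struck", "before", "teams arrived"),
        ("teams arrived", "before", "survivors rescued"),
    ],
}

for head, relation, tail in temporal_graph["edges"]:
    print(f'"{head}" --{relation}--> "{tail}"')
```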
“…steps describing how viruses spread) and U_i and H_i are nodes of G_i (these are phrases as shown in Figure 2). The output seq^i_op is set to a DOT-string representation of the corresponding influence graph G_i, as such a representation was shown to be effective at extracting high-quality graphs (Madaan and Yang, 2021) from free-form text using language models (examples in the appendix). Thus, each passage-graph pair (T_i, G_i) from WIQA is mapped to an input-output pair D = (seq^i_ip, seq^i_op).…”
Section: Influence Graphs Generation
confidence: 99%
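
The design choice highlighted in this excerpt is serializing graphs as DOT strings so a language model can emit them as ordinary text. Below is a small sketch of such a linearization, under the assumption that edges are (head, label, tail) triples; the to_dot helper and the example influence-graph edges are hypothetical, not taken from the cited work.

```python
def to_dot(edges):
    """Render (head, label, tail) triples as a Graphviz DOT string."""
    lines = ["digraph G {"]
    for head, label, tail in edges:
        lines.append(f'  "{head}" -> "{tail}" [label="{label}"];')
    lines.append("}")
    return "\n".join(lines)

# Invented influence-graph edges in the spirit of the WIQA example above.
edges = [
    ("virus enters body", "enables", "virus replicates"),
    ("virus replicates", "causes", "symptoms appear"),
]
print(to_dot(edges))
```

Because the DOT string is plain text, the graph can be produced token-by-token by a language model and parsed back into nodes and edges afterward, which is what makes this representation convenient for graph generation.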