Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1041
Joint Event and Temporal Relation Extraction with Shared Representations and Structured Prediction

Abstract: We propose a joint event and temporal relation extraction model with shared representation learning and structured prediction. The proposed method has two advantages over existing work. First, it improves event representation by allowing the event and relation modules to share the same contextualized embeddings and neural representation learner. Second, it avoids error propagation in the conventional pipeline systems by leveraging structured inference and learning methods to assign both the event labels and th…


Cited by 88 publications (89 citation statements)
References 29 publications
“…When the need arises to go beyond sentence-level, some works combine the output scores of independently trained classifiers using inference (Beltagy et al., 2014; Liu et al., 2016; Subramanian et al., 2017; Ning et al., 2018), whereas others implement joint learning for their specific domains (Niculae et al., 2017; Han et al., 2019). Our main differentiating factor is that we provide a general interface that leverages first-order logic clauses to specify factor graphs and express constraints.…”
Section: Deep Classifiers and Probabilistic Inference
confidence: 99%
“…Local vs. Global Learning: The trade-off between local and global learning has been explored for graphical models (MEMM vs. CRF), and for deep structured prediction (Chen and Manning, 2014;Andor et al, 2016;Han et al, 2019). Although local learning is faster, the learned scoring functions might not be consistent with the correct global prediction.…”
Section: Modeling Strategies
confidence: 99%
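The inconsistency this statement describes can be made concrete with a toy sequence-labeling sketch (hypothetical scores, not from any of the cited papers): a locally trained scorer picks the best label at each position independently, while global (Viterbi-style) decoding also accounts for transition scores between adjacent labels, so the two can disagree.

```python
# Hypothetical toy scores: 2 tokens, 2 labels.
# Emission (local) scores favor label 0 at every position, but the
# transition score heavily penalizes the label pair (0, 0).
emissions = [[1.0, 0.5], [1.0, 0.9]]   # emissions[t][y]
transitions = {(0, 0): -5.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.0}

def greedy_local(emissions):
    """Local decoding: pick the best label at each position independently."""
    return [max(range(len(e)), key=lambda y: e[y]) for e in emissions]

def viterbi_global(emissions, transitions):
    """Global decoding: maximize emission + transition score over the sequence."""
    n_labels = len(emissions[0])
    # best[y] = (score, path) of the best sequence ending in label y
    best = {y: (emissions[0][y], [y]) for y in range(n_labels)}
    for t in range(1, len(emissions)):
        best = {
            y: max(
                (best[p][0] + transitions[(p, y)] + emissions[t][y],
                 best[p][1] + [y])
                for p in range(n_labels)
            )
            for y in range(n_labels)
        }
    return max(best.values())[1]

print(greedy_local(emissions))                 # [0, 0]
print(viterbi_global(emissions, transitions))  # [0, 1]
```

Here local decoding returns [0, 0] because label 0 wins at every position in isolation, yet the globally best sequence is [0, 1] once the transition penalty is counted in, which is exactly the sense in which locally learned scoring functions "might not be consistent with the correct global prediction."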
“…For textual relation modeling, Schuster et al. [50] model text as a graph structure to mine textual relations and improve image retrieval performance. Han et al. [51] not only model text with a graph structure but also consider the temporal order of the events described in the text. This paper draws on the scene-graph structure to represent visual and textual relations.…”
Section: Visual or Textual Relation Modeling
“…Yoshikawa et al. (2009); Ning et al. (2017); Leeuwenberg and Moens (2017) explore structured learning for this task, and more recently, neural methods have also been shown effective (Tourille et al., 2017; Cheng and Miyao, 2017; Meng et al., 2017; Meng and Rumshisky, 2018). Ning et al. (2018c) and Han et al. (2019b) are the most recent works leveraging neural networks and pre-trained language models to build end-to-end systems. Our work differs from this prior work in that we build a structured neural model with distributional constraints, combining the benefits of deep learning and domain knowledge.…”
Section: Related Work
confidence: 99%