Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d15-1112

Semantic Role Labeling with Neural Network Factors

Abstract: We present a new method for semantic role labeling in which arguments and semantic roles are jointly embedded in a shared vector space for a given predicate. These embeddings belong to a neural network, whose output represents the potential functions of a graphical model designed for the SRL task. We consider both local and structured learning methods and obtain strong results on standard PropBank and FrameNet corpora with a straightforward product-of-experts model. We further show how the model can learn join…
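As a rough illustration of the idea described in the abstract, the sketch below embeds a candidate argument span and each semantic role into a shared space and uses the resulting scores as potentials for a factor graph. All names, features, and dimensions here are illustrative assumptions, not the authors' actual architecture.

```python
# Minimal sketch: a small network scores (argument span, role) pairs for a
# given predicate, and the exponentiated scores act as factor potentials.
import numpy as np

rng = np.random.default_rng(0)
D_FEAT, D_EMB, N_ROLES = 50, 32, 10                     # hypothetical sizes

W_arg = rng.normal(scale=0.1, size=(D_EMB, D_FEAT))     # argument-span encoder
W_role = rng.normal(scale=0.1, size=(N_ROLES, D_EMB))   # role embeddings

def role_potentials(span_features: np.ndarray) -> np.ndarray:
    """Return one non-negative potential per role for a candidate span."""
    arg_emb = np.tanh(W_arg @ span_features)   # embed the argument span
    scores = W_role @ arg_emb                  # dot product with each role embedding
    return np.exp(scores)                      # potentials for the graphical model

span_feats = rng.normal(size=D_FEAT)           # stand-in for real span features
print(role_potentials(span_feats).shape)       # (N_ROLES,)
```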

Cited by 110 publications (110 citation statements)
References 30 publications
“…) is a graphical model with global factors; FitzGerald (Struct.) is an improved version of the graphical model with non-linear potential functions instead of linear ones; FitzGerald (Struct., PoE) further employs an ensemble with the product-of-experts (PoE) (Hinton, 2002); and FitzGerald (Local, PoE, Joint) indicates the best reported results in FitzGerald et al. (2015), which uses local factors and additional training data from CoNLL 2005. We can see that our sequential model alone is already close to the state of the art.…”
Section: FrameNet Results
confidence: 99%
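The product-of-experts ensemble mentioned in the statement above (Hinton, 2002) multiplies the per-role distributions of several models and renormalizes, which is equivalent to summing their log-probabilities. Below is a minimal sketch of that combination rule, not FitzGerald et al.'s actual ensembling code.

```python
# Product-of-experts (PoE) combination over a small role inventory.
import numpy as np

def product_of_experts(expert_probs: np.ndarray) -> np.ndarray:
    """expert_probs: (n_experts, n_roles); each row is a probability distribution."""
    log_joint = np.sum(np.log(expert_probs + 1e-12), axis=0)  # sum of log-probs
    log_joint -= log_joint.max()                              # numerical stability
    joint = np.exp(log_joint)
    return joint / joint.sum()                                # renormalize

experts = np.array([[0.6, 0.3, 0.1],
                    [0.5, 0.4, 0.1],
                    [0.7, 0.2, 0.1]])
print(product_of_experts(experts))   # sharper than any single expert's distribution
```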
“…Table 5 shows the results of our SRL models on the CoNLL 2005 data. Our baselines include the best feature-based systems of Surdeanu et al. (2007), Toutanova et al. (2008), and Punyakanok et al. (2008), the recurrent neural network model (DB-LSTM) (Zhou and Xu, 2015), the graphical model with global factors, and the improved versions that use neural network factors (FitzGerald et al., 2015). Note that our sequential model in this setting is essentially the same as the DB-LSTM model (Zhou and Xu, 2015), since all the frame-specific constraints are removed, except that we use simpler input features.…”
Section: FrameNet Results
confidence: 99%
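The DB-LSTM baseline referenced in this statement tags each token with a role label using a deep bidirectional LSTM. The sketch below shows a minimal BiLSTM tagger of that general kind; the layer sizes, predicate-indicator feature, and tag-set size are assumptions for illustration, not the cited system.

```python
# Minimal sequential SRL tagger: BiLSTM over words plus a predicate indicator,
# producing per-token scores over BIO-style role tags.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, hidden=200, n_tags=60):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.pred_emb = nn.Embedding(2, 16)            # predicate-indicator feature
        self.lstm = nn.LSTM(emb_dim + 16, hidden,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)       # per-token role-tag scores

    def forward(self, words, pred_mask):
        x = torch.cat([self.word_emb(words), self.pred_emb(pred_mask)], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)                             # (batch, seq_len, n_tags)

model = BiLSTMTagger()
words = torch.randint(0, 10000, (1, 12))               # a 12-token sentence
pred_mask = torch.zeros(1, 12, dtype=torch.long)
pred_mask[0, 4] = 1                                    # mark the predicate position
print(model(words, pred_mask).shape)                   # torch.Size([1, 12, 60])
```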
“…Additionally, a number of systems exploit syntactic information (Roth and Lapata, 2016). Some of the results are obtained with ensemble systems (FitzGerald et al., 2015; Roth and Lapata, 2016). As we observe, the baseline model (see Section 2) is quite strong, and outperforms the previous state-of-the-art systems on both in-domain and out-of-domain sets on English.…”
Section: Overall Results
confidence: 82%
“…Although many studies confirmed that embeddings obtained from distributional similarity can be useful in a variety of different tasks, Levy et al. (2015) showed that the semantic knowledge encoded by general-purpose similarity embeddings is limited, and that enforcing the learnt representations to distinguish functional similarity from relatedness is beneficial. For this purpose, many task-specific embeddings have been proposed for a variety of tasks, including those of Riedel et al. (2013) for binary relation extraction and FitzGerald et al. (2015) for semantic role labeling. This work aims to preserve more far-reaching structures, namely analogies between pairs of entities.…”
Section: Related Work
confidence: 99%