Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL '05), 2005
DOI: 10.3115/1219840.1219913

Joint learning improves semantic role labeling

Abstract: Despite much recent progress on accurate semantic role labeling, previous work has largely used independent classifiers, possibly combined with separate label sequence models via Viterbi decoding. This stands in stark contrast to the linguistic observation that a core argument frame is a joint structure, with strong dependencies between arguments. We show how to build a joint model of argument frames, incorporating novel features that model these interactions into discriminative log-linear models. This system a…
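As a rough illustration of the kind of joint, discriminative log-linear scoring the abstract describes, the sketch below scores whole candidate argument frames with features over pairs of arguments as well as individual arguments. All feature names, weights, and toy candidates are hypothetical and only indicate the general technique, not the paper's actual model.

```python
import math
from itertools import combinations

# Hypothetical joint log-linear scorer for a candidate argument frame.
# A frame is a list of (role, span) pairs for one predicate.
def frame_features(frame):
    feats = {}
    for role, _span in frame:
        feats[f"role={role}"] = feats.get(f"role={role}", 0) + 1
    # Joint features: role-pair co-occurrence captures dependencies
    # between arguments of the same predicate.
    for (r1, _), (r2, _) in combinations(sorted(frame), 2):
        key = f"pair={r1}+{r2}"
        feats[key] = feats.get(key, 0) + 1
    return feats

def score(frame, weights):
    return sum(weights.get(f, 0.0) * v for f, v in frame_features(frame).items())

def pick_best(candidate_frames, weights):
    # Log-linear model: P(frame) is proportional to exp(score);
    # the argmax over candidates is enough for decoding.
    scores = [score(f, weights) for f in candidate_frames]
    z = sum(math.exp(s) for s in scores)
    best = max(range(len(candidate_frames)), key=lambda i: scores[i])
    return candidate_frames[best], math.exp(scores[best]) / z

# Toy example: two candidate labelings of the same predicate.
candidates = [
    [("ARG0", (0, 1)), ("ARG1", (3, 5))],
    [("ARG1", (0, 1)), ("ARG1", (3, 5))],   # duplicate core role, implausible
]
weights = {"role=ARG0": 0.5, "role=ARG1": 0.4,
           "pair=ARG0+ARG1": 1.0, "pair=ARG1+ARG1": -2.0}
print(pick_best(candidates, weights))
```

The joint pair feature is what lets the model penalize frames with repeated core roles, which an independent per-argument classifier cannot do directly.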

Cited by 98 publications (105 citation statements)
References 13 publications
“…Thus it is often beneficial to develop joint models to identify the various elements of a frame (Toutanova et al., 2005). However, these assumptions are less viable when dealing with emotions in tweets.…”
Section: Challenges of Semantic Role Labeling of Emotions in Tweets
confidence: 99%
“…One way to train these global features is to learn a linear classifier that selects a parse/frame pair from the ranked list, in the manner of Collins (2000). Reranking has previously been applied to semantic role labeling by Toutanova et al. (2005), from which we use several features. The difference between this paper and Toutanova et al. (2005) is that instead of reranking the k-best SRL frames of a single parse tree, we rerank the 1-best SRL frames from the k-best parse trees.…”
Section: Training a Reranker Using Global Features
confidence: 99%
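A minimal sketch of the reranking setup this statement describes: a linear classifier scores global features of each (parse, frame) candidate and keeps the highest-scoring pair, in the spirit of Collins (2000). The candidate encoding and feature functions are assumptions made for illustration, not the cited systems' actual code.

```python
# Hypothetical reranker over (parse, frame, base_score) candidates.
# Reranking 1-best frames from k-best parses: each candidate pairs one
# parse tree with the single best SRL frame found for it.

def global_features(candidate):
    parse, frame, base_score = candidate
    return {
        "base_score": base_score,   # score from the base SRL model
        "num_args": float(len(frame)),   # frame-wide (global) property
        "has_core_arg": float(any(r in {"ARG0", "ARG1", "ARG2"}
                                  for r, _ in frame)),
    }

def rerank(candidates, weights):
    # Linear model: pick the candidate whose global feature vector has the
    # highest dot product with the learned weight vector.
    def score(c):
        return sum(weights.get(f, 0.0) * v
                   for f, v in global_features(c).items())
    return max(candidates, key=score)

kbest = [
    ("(S (NP ...) (VP ...))", [("ARG0", (0, 1)), ("ARG1", (3, 5))], -2.1),
    ("(S (NP ...) (VP ...))", [("ARGM-TMP", (0, 1))], -1.8),
]
weights = {"base_score": 1.0, "num_args": 0.3, "has_core_arg": 0.7}
print(rerank(kbest, weights))
```

In practice the weights would be trained on held-out data so that the reranker learns when to overrule the base model's score.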
“…This concept is intuitive when reasoning about the link between syntax and semantics, and it has been used earlier in semantic interpreters such as Absity (Hirst, 1983). However, except for a few tentative experiments (Toutanova et al., 2005), grammatical function is not explicitly used by current automatic SRL systems; instead, it is emulated from constituent trees by features such as the constituent position and the governing category. More generally, these linguistic considerations have led a number of linguists to argue that dependency structures are more suitable for explaining the syntax-semantics interface (Mel'čuk, 1988; Hudson, 1984).…”
Section: Introduction
confidence: 99%
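To make the contrast concrete, here is a hedged sketch of the two constituent-tree features this statement mentions as proxies for grammatical function: the argument's position relative to the predicate and its governing category. The flat node encoding and helper names are assumptions for illustration only.

```python
# Illustrative extraction of two standard constituent-based SRL features.
# A node is (label, start, end, parent_index) in a flat list; this encoding
# is an assumption made for the sketch, not a standard format.

def position_feature(arg_start, arg_end, pred_index):
    # Constituent position: does the argument precede or follow the predicate?
    return "before" if arg_end <= pred_index else "after"

def governing_category(nodes, arg_index):
    # Governing category: the first S or VP ancestor of the argument node,
    # commonly used as a proxy for subject vs. object grammatical function.
    parent = nodes[arg_index][3]
    while parent is not None:
        label, _, _, next_parent = nodes[parent]
        if label in ("S", "VP"):
            return label
        parent = next_parent
    return "NONE"

# Toy tree for "The cat chased the dog": S -> NP VP, VP -> V NP
nodes = [
    ("S", 0, 5, None),   # 0
    ("NP", 0, 2, 0),     # 1: "The cat"
    ("VP", 2, 5, 0),     # 2: "chased the dog"
    ("NP", 3, 5, 2),     # 3: "the dog"
]
print(position_feature(0, 2, 2), governing_category(nodes, 1))  # before S  (subject-like)
print(position_feature(3, 5, 2), governing_category(nodes, 3))  # after  VP (object-like)
```

An NP governed by S tends to be a subject and one governed by VP an object, which is exactly the grammatical-function information a dependency tree would state directly.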