Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d18-1191
A Span Selection Model for Semantic Role Labeling

Abstract: We present a simple and accurate span-based model for semantic role labeling (SRL). Our model directly takes into account all possible argument spans and scores them for each label. At decoding time, we greedily select higher-scoring labeled spans. One advantage of our model is that it allows us to design and use span-level features, which are difficult to use in token-based BIO tagging approaches. Experimental results demonstrate that our ensemble model achieves state-of-the-art results, 87.4 F1 and 87.0 F1 on the…
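The decoding procedure the abstract describes — score every candidate span for each label, then greedily select the higher-scoring labeled spans — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the `score` function, the null label `"O"`, and the `max_width` pruning are assumptions for the sketch.

```python
def greedy_span_decode(num_tokens, labels, score, max_width=10):
    """Greedily select non-overlapping labeled argument spans.

    `score(i, j, label)` is an assumed scoring function returning a
    real-valued score for assigning `label` to tokens [i, j]; spans
    whose best label is the null label "O" are skipped.
    """
    candidates = []
    for i in range(num_tokens):
        for j in range(i, min(i + max_width, num_tokens)):
            best_label = max(labels, key=lambda l: score(i, j, l))
            if best_label != "O":
                candidates.append((score(i, j, best_label), i, j, best_label))
    # Higher-scoring spans are considered first.
    candidates.sort(reverse=True)
    selected, used = [], set()
    for _, i, j, label in candidates:
        span_tokens = set(range(i, j + 1))
        if used.isdisjoint(span_tokens):  # keep selected spans non-overlapping
            selected.append((i, j, label))
            used |= span_tokens
    return sorted(selected)
```

With a trained scorer in place of the toy `score` function, this greedy pass over all O(n · max_width) candidate spans is what replaces token-level BIO decoding.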

Cited by 85 publications (76 citation statements) · References 39 publications
“…For syntactic chunking, we use a variant of the Reconciled Span Parser (Joshi et al., 2018). For SRL, we use the span selection model (BiLSTM-Span model) (Ouchi et al., 2018). Each model is trained on a source-domain training set and evaluated on a target-domain test set.…”
Section: Methods
confidence: 99%
“…Semantic Role Labeling. Span-based SRL only exists on English data (Zhou and Xu, 2015; Strubell et al., 2018; Ouchi et al., 2018). Dependency-based SRL models such as (Cai et al., 2018; Li et al., 2019) are the state of the art for English.…”
Section: Related Work
confidence: 99%
“…Instead, suppose we had a function g(y, L_x) → R+ that measures a loss between output y and a grammar L_x, such that g(y, L_x) = 0 if and only if there are no grammatical errors in y. That is, g(y, L_x) = 0 for the feasible region. Since our submission, the previous SOTA (Peters et al., 2018) in SRL, on which we apply our technique, has been advanced by 1.7 F1 points (Ouchi, Shindo, and Matsumoto, 2018). However, this is a training-time improvement, which is orthogonal to our work.…”
Section: Problem Definition and Motivation
confidence: 99%
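A function of the kind the excerpt describes — g(y, L_x) ≥ 0 that vanishes exactly when the output satisfies the grammar — can be illustrated as a constraint-violation counter. This is a hypothetical sketch, not the cited paper's definition: the span representation and the two SRL constraints (non-overlapping arguments, at most one occurrence of each core role) are assumptions chosen to fit the SRL setting.

```python
def g(spans):
    """Hypothetical grammaticality loss g(y, L_x): counts constraint
    violations in a predicted SRL structure y, so g(y) == 0 if and
    only if y satisfies every constraint in the assumed grammar L_x.

    y is a list of (start, end, label) spans; the grammar here is two
    standard SRL constraints: argument spans must not overlap, and
    each core role (A0-A5) may appear at most once.
    """
    violations = 0
    # Constraint 1: no two argument spans may overlap.
    for a in range(len(spans)):
        for b in range(a + 1, len(spans)):
            (s1, e1, _), (s2, e2, _) = spans[a], spans[b]
            if s1 <= e2 and s2 <= e1:
                violations += 1
    # Constraint 2: each core role appears at most once.
    core_counts = {}
    for _, _, label in spans:
        if label in {"A0", "A1", "A2", "A3", "A4", "A5"}:
            core_counts[label] = core_counts.get(label, 0) + 1
    violations += sum(count - 1 for count in core_counts.values())
    return violations
```

Any non-negative loss with this zero set would serve the excerpt's purpose; counting violations is just the simplest instance.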