Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2018
DOI: 10.18653/v1/p18-1191
Large-Scale QA-SRL Parsing

Abstract: We present a new large-scale corpus of Question-Answer driven Semantic Role Labeling (QA-SRL) annotations, and the first high-quality QA-SRL parser. Our corpus, QA-SRL Bank 2.0, consists of over 250,000 question-answer pairs for over 64,000 sentences across 3 domains and was gathered with a new crowd-sourcing scheme that we show has high precision and good recall at modest cost. We also present neural models for two QA-SRL subtasks: detecting argument spans for a predicate and generating questions to label the…
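To make the annotation style concrete, here is a minimal Python sketch of what one QA-SRL entry might look like: a sentence, a marked predicate, and natural-language question-answer pairs whose answers are spans of the sentence. The field names and example sentence are illustrative assumptions, not the actual QA-SRL Bank 2.0 schema.

```python
# Illustrative sketch of a QA-SRL style annotation (assumed field names,
# not the actual QA-SRL Bank 2.0 file format).
from dataclasses import dataclass, field
from typing import List


@dataclass
class QAPair:
    question: str        # natural-language question about the predicate
    answers: List[str]   # answer spans copied from the sentence


@dataclass
class PredicateAnnotation:
    predicate: str       # the verb being annotated
    predicate_index: int # token position of the verb in the sentence
    qa_pairs: List[QAPair] = field(default_factory=list)


# One sentence, one annotated predicate, several question-answer pairs.
sentence = "The company sold its shares to investors last year ."
annotation = PredicateAnnotation(
    predicate="sold",
    predicate_index=2,
    qa_pairs=[
        QAPair("Who sold something?", ["The company"]),
        QAPair("What was sold?", ["its shares"]),
        QAPair("Who was something sold to?", ["investors"]),
        QAPair("When was something sold?", ["last year"]),
    ],
)
```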

Cited by 70 publications (118 citation statements)
References 21 publications
“…In this more realistic setting, where the predicate must be predicted, our model achieves state-of-the-art performance on PropBank. It also reinforces the strong performance of similar span embedding methods for coreference, suggesting that this style of models could be used for other span-span relation tasks, such as syntactic parsing (Stern et al., 2017), relation extraction (Miwa and Bansal, 2016), and QA-SRL (FitzGerald et al., 2018). We consider the space of possible predicates to be all the tokens in the input sentence, and the space of arguments to be all continuous spans. Our model decides what relation exists between each predicate-argument pair (including no relation).…”
Section: Introduction (mentioning)
confidence: 58%
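As a rough illustration of the span-pair formulation described in the excerpt above (not the cited model's actual architecture), the sketch below enumerates every token as a candidate predicate and every contiguous span as a candidate argument, then scores each pair. The `score_pair` function and the threshold standing in for "no relation" are hypothetical placeholders for a learned scorer.

```python
# Sketch of the predicate-argument pair enumeration described above;
# score_pair is a hypothetical stand-in for a learned span-pair model.
from typing import List, Tuple


def candidate_arguments(n_tokens: int, max_width: int = 10) -> List[Tuple[int, int]]:
    """All contiguous spans (start, end) up to max_width tokens wide."""
    return [
        (start, end)
        for start in range(n_tokens)
        for end in range(start, min(start + max_width, n_tokens))
    ]


def score_pair(predicate: int, span: Tuple[int, int]) -> float:
    """Placeholder scorer; a real model would embed the predicate token
    and the argument span and score their compatibility."""
    return 0.0


def predict_relations(tokens: List[str]):
    # Every token is a candidate predicate; every span is a candidate argument.
    predictions = []
    for pred in range(len(tokens)):
        for span in candidate_arguments(len(tokens)):
            score = score_pair(pred, span)
            if score > 0.0:  # threshold stands in for the "no relation" decision
                predictions.append((pred, span, score))
    return predictions
```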
“…Recent work has focused on cheaply eliciting quality annotations from non-experts through crowdsourcing (He et al., 2016; Iyer et al., 2017). FitzGerald et al. (2018) facilitated non-expert annotation by introducing a formalism expressed in natural language for semantic role labeling. This mirrors QDMR, as both are expressed in natural language.…”
Section: Related Work (mentioning)
confidence: 99%
“…We evaluate the potential of the oracle policy on QA-SRL Bank 2.0 (FitzGerald et al., 2018). We use the training set of the science domain as D, randomly split it into S_lab, S_unlab, and S_eval.…”
Section: Experimental Settings (mentioning)
confidence: 99%
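A minimal sketch of the random three-way split mentioned in the excerpt above. The 10/80/10 proportions and the seed are assumptions for illustration, not the cited authors' settings.

```python
# Sketch of a random split into labeled (S_lab), unlabeled (S_unlab),
# and evaluation (S_eval) subsets; proportions and seed are assumed.
import random


def three_way_split(examples, labeled_frac=0.1, eval_frac=0.1, seed=0):
    shuffled = examples[:]
    random.Random(seed).shuffle(shuffled)
    n_labeled = int(len(shuffled) * labeled_frac)
    n_eval = int(len(shuffled) * eval_frac)
    s_lab = shuffled[:n_labeled]
    s_eval = shuffled[n_labeled:n_labeled + n_eval]
    s_unlab = shuffled[n_labeled + n_eval:]
    return s_lab, s_unlab, s_eval
```

Calling `three_way_split(training_sentences)` returns the three disjoint subsets referenced in the excerpt.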
“…We compare the results of a base oracle policy (BASEORACLE), corresponding to the best policy we were able to obtain using the architecture from FitzGerald et al. (2018), to the following baselines:…”
Section: Experimental Settings (mentioning)
confidence: 99%