Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d18-1538

Towards Semi-Supervised Learning for Deep Semantic Role Labeling

Abstract: Neural models have achieved several state-of-the-art results on Semantic Role Labeling (SRL). However, these models require an immense amount of semantic-role-annotated corpora and are thus not well suited to low-resource languages or domains. This paper proposes a semi-supervised semantic role labeling method that outperforms the state of the art when SRL training corpora are limited. The method is based on explicitly enforcing syntactic constraints by augmenting the training objective with a syntactic-inconsistency loss…
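The abstract's key idea is to add a constraint-violation penalty to the usual supervised objective. The sketch below is a minimal illustration of that pattern, not the paper's exact formulation: `violation_mask` and the hyperparameter `lam` are assumptions, standing in for whatever syntactic-inconsistency measure the paper defines.

```python
import torch
import torch.nn.functional as F

def srl_loss_with_syntax_penalty(logits, gold_tags, violation_mask, lam=0.5):
    """Cross-entropy SRL loss plus a syntactic-inconsistency penalty
    (illustrative sketch; shapes and penalty are assumed, not the paper's).

    logits:         (seq_len, num_tags) unnormalized tag scores
    gold_tags:      (seq_len,) gold tag indices
    violation_mask: (seq_len, num_tags), 1.0 where assigning that tag to
                    that token would contradict the syntactic parse
    """
    # Standard supervised term.
    ce = F.cross_entropy(logits, gold_tags)
    # Penalty: probability mass placed on syntactically inconsistent tags.
    probs = F.softmax(logits, dim=-1)
    inconsistency = (probs * violation_mask).sum(dim=-1).mean()
    return ce + lam * inconsistency
```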

Cited by 17 publications (9 citation statements). References 10 publications.

“…For BIO tags, the set V comprises all taggings that don't include an I after an O, and the maximization problem can be solved in linear time using Viterbi decoding (Viterbi, 1967), as in Yao et al. (2013) and Mehta et al. (2018). For IO tags, all taggings are valid, and maximization is done by predicting the highest-probability tag at each token independently.…”
Section: Decoding Spans From a Tagging
confidence: 99%
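The constrained decoding this excerpt describes is standard Viterbi over a transition-validity relation. Below is a minimal sketch, assuming per-token log-probabilities and a tag inventory like ["O", "B-ARG0", "I-ARG0"]; the cited decoders may differ in detail. (For IO tags, a per-token argmax suffices, as the excerpt notes.)

```python
import numpy as np

def viterbi_bio(log_probs, tags):
    """Constrained Viterbi over BIO tags: disallow I-X after O or after a
    B/I of a different label. `log_probs` is (seq_len, num_tags); `tags`
    maps tag index -> string such as "B-ARG0", "I-ARG0", "O".
    Illustrative sketch, not the cited implementation."""
    def allowed(prev, curr):
        if not tags[curr].startswith("I-"):
            return True                      # O and B-* are always valid
        label = tags[curr][2:]
        return tags[prev] in ("B-" + label, "I-" + label)

    n, k = log_probs.shape
    score = np.full((n, k), -np.inf)
    back = np.zeros((n, k), dtype=int)
    for t in range(k):                       # I-* cannot start a sequence
        if not tags[t].startswith("I-"):
            score[0, t] = log_probs[0, t]
    for i in range(1, n):
        for t in range(k):
            for p in range(k):
                cand = score[i - 1, p] + log_probs[i, t]
                if allowed(p, t) and cand > score[i, t]:
                    score[i, t] = cand
                    back[i, t] = p
    # Backtrack from the best final tag.
    path = [int(np.argmax(score[-1]))]
    for i in range(n - 1, 0, -1):
        path.append(int(back[i, path[-1]]))
    return [tags[t] for t in reversed(path)]
```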
“…Accuracy in each of the three tasks was improved by respecting constraints. Additionally, for SRL, we employed GBI on a model trained with a constraint-enforcing loss similar to GBI's (Mehta*, Lee*, and Carbonell 2018), and observed that the additional test-time optimization of GBI still significantly improves the model output whereas A* does not. We believe this is because GBI searches in the proximity of the provided model weights; a theoretical analysis of this hypothesis is left as future work.…”
Section: Discussion
confidence: 99%
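As the excerpt says, GBI performs test-time optimization near the trained weights: for each input, it nudges a copy of the model to reduce constraint violation before predicting. The following is a minimal sketch under assumed names (`model`, `constraint_loss`), not the cited implementation.

```python
import copy
import torch

def gradient_based_inference(model, x, constraint_loss, steps=10, lr=1e-3):
    """Test-time optimization in the spirit of GBI (illustrative sketch).
    `constraint_loss(output)` is an assumed differentiable measure of how
    badly an output violates the task's structural constraints."""
    m = copy.deepcopy(model)          # search near the provided weights
    opt = torch.optim.SGD(m.parameters(), lr=lr)
    for _ in range(steps):
        loss = constraint_loss(m(x))
        if loss.item() == 0.0:        # constraints already satisfied
            break
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return m(x)                   # predict with the adjusted copy
```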
“…This field contains a plethora of different neuro-symbolic methods and techniques. The methods that closely relate to our line of work seek to enforce constraints on the output of a neural network (Hu et al., 2016; Donadello et al., 2017; Diligenti et al., 2017; Mehta et al., 2018; Xu et al., 2018; Nandwani et al., 2019). For a more in-depth introduction, we refer the reader to these excellent recent surveys: Besold et al. (2017) and De Raedt et al. (2020).…”
Section: Related Work
confidence: 99%