Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing 2018
DOI: 10.18653/v1/d18-1262

A Unified Syntax-aware Framework for Semantic Role Labeling

Abstract: Semantic role labeling (SRL) aims to recognize the predicate-argument structure of a sentence. Syntactic information has long received great attention for its role in enhancing SRL. However, the latest advances suggest that syntax may not be so important for SRL, as the gap between syntax-aware and syntax-agnostic SRL has become much smaller. To comprehensively explore the role of syntax in the SRL task, we extend existing models and propose a unified framework to investigate more effective and more diverse ways of …

Citations: Cited by 80 publications (38 citation statements)
References: 34 publications
“…To model texts into vector space, the input tokens are represented as embeddings in deep learning models [28,29,30,45,46,55,57]. Previous work has shown that word representations in NLP tasks can benefit from character-level models, which aim at learning language representations directly from characters.…”
Section: Related Work, A. Augmented Embedding (mentioning)
confidence: 99%
“…Roth and Lapata [27] embedded dependency paths as syntactic information into a model and exhibited notable success. Li et al [28] extended existing models and proposed a unified framework to investigate more effective and more diverse ways of incorporating syntax into sequential neural networks. Different from the above works, our work is built on a Bi-LSTM neural model, and an adversarial domain classifier is incorporated into the model to encourage domain-invariant features to emerge.…”
Section: Semantic Role Labeling (mentioning)
confidence: 99%
“…The problem can be tackled if we can construct a new vector space in which the divergence between the two distributions is very small. Some works [28], [29] suggest that the adversarial loss can measure the H-divergence between two distributions. Thus, this paper incorporates adversarial learning to narrow the divergence between the source and target distributions.…”
Section: Adversarial Training (mentioning)
confidence: 99%