Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1531
Sequence Labeling Parsing by Learning across Representations

Abstract: We use parsing as sequence labeling as a common framework to learn across constituency and dependency syntactic abstractions. To do so, we cast the problem as multitask learning (MTL). First, we show that adding a parsing paradigm as an auxiliary loss consistently improves the performance on the other paradigm. Secondly, we explore an MTL sequence labeling model that parses both representations, at almost no cost in terms of performance and speed. The results across the board show that on average MTL models wi…
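The abstract describes a shared sequence-labeling model with one softmax head per parsing paradigm, where the loss of the secondary paradigm acts as an auxiliary signal. The sketch below only illustrates that general setup and is not the authors' implementation (which builds on a BiLSTM tagging framework); the class name, layer sizes, and the auxiliary-loss weight are hypothetical placeholders.

```python
# Illustrative sketch of an MTL sequence-labeling parser: a shared BiLSTM
# encoder with two heads, one per parsing paradigm. NOT the authors' code;
# all sizes and the auxiliary-loss weight are assumed for illustration.
import torch
import torch.nn as nn

class MTLSequenceLabelingParser(nn.Module):
    def __init__(self, vocab_size, n_const_labels, n_dep_labels,
                 emb_dim=100, hidden_dim=400):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Shared BiLSTM encoder over the sentence.
        self.encoder = nn.LSTM(emb_dim, hidden_dim // 2,
                               batch_first=True, bidirectional=True)
        # One sequence-labeling head per syntactic representation.
        self.const_head = nn.Linear(hidden_dim, n_const_labels)
        self.dep_head = nn.Linear(hidden_dim, n_dep_labels)

    def forward(self, word_ids):
        h, _ = self.encoder(self.embed(word_ids))
        return self.const_head(h), self.dep_head(h)

def mtl_loss(const_logits, dep_logits, const_gold, dep_gold, aux_weight=0.1):
    """Main-task loss plus a down-weighted auxiliary-task loss; here
    dependency parsing is the main task. aux_weight is a hypothetical
    value, not taken from the paper."""
    ce = nn.CrossEntropyLoss()
    main = ce(dep_logits.flatten(0, 1), dep_gold.flatten())
    aux = ce(const_logits.flatten(0, 1), const_gold.flatten())
    return main + aux_weight * aux
```

For the joint variant that parses both representations, the same two heads would simply be trained with equally weighted losses instead of treating one as auxiliary.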

Cited by 16 publications (14 citation statements)
References 32 publications
“…Finally, the best result for dependency parsing is achieved when adding constituency parsing as an auxiliary task (D-MTL-AUX). More experiments on various languages, and the reported speeds when including the MTL approach, are presented in [12].…”
Section: Results (mentioning)
confidence: 99%
“…Generally, these methods consider injecting either a standalone constituency tree or a dependency tree via tree encoders such as Tree-LSTM (Socher et al., 2013; Tai et al., 2015a) or GCN (Kipf and Welling, 2017). Based on the assumption that the dependency and constituency representations can be disentangled and coexist in one shared model, existing efforts have been devoted to joint constituent and dependency parsing, verifying the mutual benefit of these heterogeneous structures (Collins, 1997; Charniak, 2000; Charniak and Johnson, 2005; Farkas et al., 2011; Ren et al., 2013; Yoshikawa et al., 2017; Strzyz et al., 2019; Kato and Matsubara, 2019; Zhou and Zhao, 2019). However, little attention has been paid to facilitating syntax-dependent tasks by integrating heterogeneous syntactic trees.…”
Section: Syntactic Structures for Text Modeling (mentioning)
confidence: 99%
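The snippet above contrasts learning across representations with work that injects a single syntactic tree through a tree encoder such as a Tree-LSTM or a GCN. Below is a minimal sketch of one graph-convolution layer over dependency arcs, in the spirit of Kipf and Welling (2017); the symmetric, self-loop-augmented adjacency and the single weight matrix are illustrative simplifications, not the design of any cited system.

```python
# Minimal graph-convolution layer over a dependency tree: each word
# aggregates the representations of its head and dependents. The
# adjacency construction here is an assumed simplification.
import torch
import torch.nn as nn

class SyntacticGCNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, h, heads):
        # h: (n_words, dim); heads[i]: index of word i's head, -1 for root.
        n = h.size(0)
        adj = torch.eye(n)                           # self loops
        for i, head in enumerate(heads):
            if head >= 0:                            # arc, both directions
                adj[i, head] = adj[head, i] = 1.0
        adj = adj / adj.sum(dim=1, keepdim=True)     # row-normalize
        return torch.relu(self.linear(adj @ h))      # aggregate neighbors
```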
“…Rei [28] considers these patterns useful for improving accuracy on sequence labeling tasks. Strzyz et al. [32] use sequence labeling for constituency [11] and dependency parsing [33], combined with multi-task learning to learn across syntactic representations. They show that adding a parsing paradigm as an auxiliary loss consistently improves the performance on the other paradigm.…”
Section: Sequence Labeling (mentioning)
confidence: 99%
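The snippet above refers to casting dependency parsing as sequence labeling, where each word receives one discrete label from which the tree can be recovered. The toy functions below illustrate that idea with a plain relative-offset encoding; the encodings studied by Strzyz et al. [32] are richer (e.g., PoS-based relative positioning of the head), so this is a simplified sketch, not their exact scheme.

```python
# Toy dependency-parsing-as-sequence-labeling encoding: each word's label
# packs its head's relative offset and its dependency relation.

def encode(heads, rels):
    """heads[i] is the 0-based index of word i's head (-1 for root)."""
    return [f"{h - i if h >= 0 else 'root'}@{r}"
            for i, (h, r) in enumerate(zip(heads, rels))]

def decode(labels):
    """Invert encode(): recover head indices and relations."""
    heads, rels = [], []
    for i, label in enumerate(labels):
        offset, rel = label.split("@")
        heads.append(-1 if offset == "root" else i + int(offset))
        rels.append(rel)
    return heads, rels

# "John saw Mary": "saw" is the root; "John" and "Mary" attach to it.
labels = encode([1, -1, 1], ["nsubj", "root", "obj"])
# labels == ['1@nsubj', 'root@root', '-1@obj']
assert decode(labels) == ([1, -1, 1], ["nsubj", "root", "obj"])
```

Once trees are flattened into per-word labels like these, any off-the-shelf sequence labeler, including the multi-task one sketched earlier, can be trained on them.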
“…They further demonstrate that a single multi-task learning model following their strategy can robustly produce both constituency and dependency trees, obtaining performance and speed comparable to previous single-paradigm sequence labeling parsers [32]. Chen and Moschitti [7] propose an approach for transferring the knowledge of a neural sequence labeling model, learned on a source domain, to a new model trained on a target domain where new label categories appear.…”
Section: Sequence Labeling (mentioning)
confidence: 99%