Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/2021.emnlp-main.395
Improved Latent Tree Induction with Distant Supervision via Span Constraints

Abstract: For over thirty years, researchers have developed and analyzed methods for latent tree induction as an approach for unsupervised syntactic parsing. Nonetheless, modern systems still do not perform well enough compared to their supervised counterparts to have any practical use as structural annotation of text. In this work, we present a technique that uses distant supervision in the form of span constraints (i.e., phrase bracketing) to improve performance in unsupervised constituency parsing. Using a relatively …

Cited by 3 publications (3 citation statements)
References 59 publications
“…Linguistic Structures: When z is a linguistic structure, this method enables unsupervised or semi-supervised parsing. Drozdov et al. (2019, 2020) and Xu et al. (2021) used a continuous-relaxation-based approach for unsupervised and distantly supervised parsing. Shen et al. (2018) softly and latently attend to a word's syntactic siblings using proximity values induced by a "syntactic distance" for language modeling and unsupervised constituency parsing.…”
Section: Auto-encoder
confidence: 99%
“…Prior work has used a similar CKY-style neural architecture for modeling unsupervised syntactic parsing (Drozdov et al., 2019; Xu et al., 2021b). These models are specific to unsupervised parsing and not directly applicable to supervised methods.…”
confidence: 99%
“…Past work has proposed incorporating other data structures specialized for hierarchical patterns into neural networks, including context-free grammars (Kim et al., 2019a,b; Kim, 2021), trees (Tai et al., 2015; Zhu et al., 2015; Kim et al., 2017; Choi et al., 2018; Havrylov et al., 2019; Corro and Titov, 2019; Xu et al., 2021), chart parsers (Le and Zuidema, 2015; Maillard et al., 2017; Drozdov et al., 2019; Maveli and Cohen, 2022), and transition-based parsers (Dyer et al., 2015; Bowman et al., 2016; Dyer et al., 2016; Shen et al., 2019a).…”
Section: Superposition
confidence: 99%