Findings of the Association for Computational Linguistics: ACL 2022
DOI: 10.18653/v1/2022.findings-acl.101
Co-training an Unsupervised Constituency Parser with Weak Supervision

Abstract: We introduce a method for unsupervised parsing that relies on bootstrapping classifiers to identify if a node dominates a specific span in a sentence. There are two types of classifiers, an inside classifier that acts on a span, and an outside classifier that acts on everything outside of a given span. Through self-training and co-training with the two classifiers, we show that the interplay between them helps improve the accuracy of both, and as a result, effectively parse. A seed bootstrapping technique prep…

Cited by 1 publication (1 citation statement)
References 35 publications
“…Past work has proposed incorporating other data structures specialized for hierarchical patterns into neural networks, including context-free grammars (Kim et al., 2019a,b; Kim, 2021), trees (Tai et al., 2015; Zhu et al., 2015; Kim et al., 2017; Choi et al., 2018; Havrylov et al., 2019; Corro and Titov, 2019; Xu et al., 2021), chart parsers (Le and Zuidema, 2015; Maillard et al., 2017; Drozdov et al., 2019; Maveli and Cohen, 2022), and transition-based parsers (Dyer et al., 2015; Bowman et al., 2016; Dyer et al., 2016; Shen et al., 2019a).…”
Section: Superposition
Confidence: 99%