Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d18-1161
Dynamic Oracles for Top-Down and In-Order Shift-Reduce Constituent Parsing

Abstract: We introduce novel dynamic oracles for training two of the most accurate known shift-reduce algorithms for constituent parsing: the top-down and in-order transition-based parsers. In both cases, the dynamic oracles manage to notably increase their accuracy, in comparison to that obtained by performing classic static training. In addition, by improving the performance of the state-of-the-art in-order shift-reduce parser, we achieve the best accuracy to date (92.0 F1) obtained by a fully-supervised single-model gr…
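To make the contrast with static training concrete, here is a minimal sketch of dynamic-oracle training with error exploration, in Python. The parser, oracle, and config interfaces and the p_explore parameter are hypothetical assumptions for illustration, not the authors' implementation: a dynamic oracle returns the set of loss-optimal transitions from any parser configuration, so the model can be updated even in states reached by its own mistaken predictions, whereas a static oracle only covers the canonical gold transition sequence.

    import random

    def train_with_dynamic_oracle(parser, oracle, sentences, gold_trees,
                                  p_explore=0.9):
        # Hypothetical error-exploration loop (all interfaces assumed).
        # A static oracle would only ever score the gold sequence; a
        # dynamic oracle returns the optimal transitions from *any*
        # configuration, so the model is also trained on states it
        # reaches by following its own (possibly wrong) predictions.
        for sentence, gold in zip(sentences, gold_trees):
            config = parser.initial_config(sentence)
            while not config.is_terminal():
                scores = parser.score_transitions(config)  # transition -> score
                predicted = max(scores, key=scores.get)
                # Transitions that lose the least F1 w.r.t. the gold
                # tree from this configuration (assumed oracle API).
                optimal = oracle.optimal_transitions(config, gold)
                best = max(optimal, key=lambda t: scores[t])
                parser.update(config, correct=best, predicted=predicted)
                # Exploration: sometimes follow the model's own choice,
                # even when it is wrong, to visit erroneous states.
                if predicted not in optimal and random.random() < p_explore:
                    config = config.apply(predicted)
                else:
                    config = config.apply(best)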

Cited by 6 publications (4 citation statements) · References 23 publications (32 reference statements)
“…Parser (no tags or predicted PoS tags), PTB F1:
Liu and Zhang (2017b): 91.8
Stern, Fried and Klein (2017b): 92.56
Fernández-González and Gómez-Rodríguez (2018): 92.0
Fried and Klein (2018): 92.2
Gaddy, Stern and Klein (2018): 92.08
Teng and Zhang (2018): 92.4
Parsers that use extra dependency information are marked with +dependency, those that ensemble several trained models with +ensemble, those that use a language model for reranking predicted trees with +LM-rerank, those that use additional parsed data with +extra-data, those that use predicted PoS tags as additional input with +PoS and, finally, those that use pre-trained language models BERT Large (Devlin et al., 2019) or XLNet (Yang et al., 2019) for the encoder initialization are marked with +BERT Large/+XLNet. We also include performance on the PTB dev split for all the tested linearizations.…”
Section: Discussion
confidence: 99%
“…Dynamic oracles have been developed for different parsing tasks (Goldberg and Nivre, 2012; Goldberg et al., 2014; Coavoux and Crabbé, 2016; Fernández-González and Gómez-Rodríguez, 2018b; Coavoux and Cohen, 2019; Gómez-Rodríguez and Fernández-González, 2015) and have been shown to improve parsing performance (Ballesteros et al., 2016; Goldberg and Nivre, 2012; Coavoux and Crabbé, 2016; Fernández-González and Gómez-Rodríguez, 2018b). These oracles work for specific output types and losses.…”
Section: Contribution
confidence: 99%
“…A predecessor of the work by Stern et al. (2017) is the paper by Cross and Huang (2016), which discusses a shift-reduce system for constituency parsing and gives a constant-time dynamic oracle for this system. It would be possible to express their setting, as well as those of Coavoux and Crabbé (2016), Fernández-González and Gómez-Rodríguez (2018b) and the discourse-parsing-focused work of Hung et al. (2020), in our framework.…”
Section: Related Work
confidence: 99%
“…The recent development of constituent parsing has focused on attention-enhanced neural models (Vinyals et al. 2015; Kuncoro et al. 2017; Kitaev and Klein 2018; Kitaev et al. 2019) and domain-independent agnostic models (Vinyals et al. 2015; Fried, Kitaev, and Klein 2019). Reranking (Charniak and Johnson 2005) has been tested for neural constituent parsing (Fried, Stern, and Klein 2017), and dynamic oracles for dependency parsing (Goldberg, Sartorio, and Satta 2014) have also been applied to constituent parsing (Fernández-González and Gómez-Rodríguez 2018; Fried and Klein 2018). Using the Berkeley parser (Petrov and Klein 2007), the Trance parser (Watanabe and Sumita 2015) and the Berkeley neural parser (Kitaev and Klein 2018; Kitaev et al. 2019), we train and evaluate on the phrase-structure Korean Sejong treebank.…”
Section: Goal of the Paper
confidence: 99%