2017
DOI: 10.1162/tacl_a_00070

In-Order Transition-based Constituent Parsing

Abstract: Both bottom-up and top-down strategies have been used for neural transition-based constituent parsing. The parsing strategies differ in terms of the order in which they recognize productions in the derivation tree, where bottom-up strategies and top-down strategies take post-order and pre-order traversal over trees, respectively. Bottom-up parsers benefit from rich features from readily built partial parses, but lack lookahead guidance in the parsing process; top-down parsers benefit from non-local guidance fo…
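The traversal orders named in the abstract map directly onto tree-walking code. The following is a minimal sketch (not the authors' implementation; the Node class and the example tree are illustrative assumptions) contrasting where each order recognizes a production relative to its subtree:

```python
# Minimal sketch (not the authors' code): each traversal emits a node's
# label at a different point relative to its subtree, which is exactly
# how the abstract distinguishes bottom-up, top-down, and in-order parsing.

class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

def pre_order(node):
    # top-down: recognize a production before its subtree is built
    yield node.label
    for child in node.children:
        yield from pre_order(child)

def post_order(node):
    # bottom-up: recognize a production after its subtree is built
    for child in node.children:
        yield from post_order(child)
    yield node.label

def in_order(node):
    # in-order: recognize a production right after its first child
    if node.children:
        yield from in_order(node.children[0])
    yield node.label
    for child in node.children[1:]:
        yield from in_order(child)

# (S (NP The dog) (VP barks))
tree = Node("S", [Node("NP", [Node("The"), Node("dog")]),
                  Node("VP", [Node("barks")])])
print(list(pre_order(tree)))   # ['S', 'NP', 'The', 'dog', 'VP', 'barks']
print(list(post_order(tree)))  # ['The', 'dog', 'NP', 'barks', 'VP', 'S']
print(list(in_order(tree)))    # ['The', 'NP', 'dog', 'S', 'barks', 'VP']
```

The in-order walk commits to a non-terminal only once its first child is complete, which is the compromise between lookahead guidance and rich partial-parse features that the paper exploits.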

Cited by 65 publications (66 citation statements) · References 21 publications
“…LP F1 | Wang et al (2015) 83.2 84.6 | Liu and Zhang (2017b) 85.9 85.2 85.5 | Liu and Zhang (2017a) 86.1 | Shen et al (2018) 86.6 86.4 86.5 | Fried and Klein (2018) 87.0 | Teng and Zhang (2018) 87.1 87.5 87.3 | Kitaev and Klein (2018b)…”
Section: LR (mentioning)
confidence: 99%
“…In this section we describe our integration of the BERT encoder into the In-Order parser decoder. We refer to the original In-Order (Liu and Zhang, 2017) and BERT (Devlin et al, 2019) papers for full details about the model architectures, only describing the modifications we make at the interface between the two. Code and pre-trained models for this integrated parser are publicly available.…”
Section: Appendix A (mentioning)
confidence: 99%
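The statement above describes plugging a BERT encoder into the In-Order parser's decoder. The sketch below is not the released integration code; it only illustrates the interface being described, assuming the Hugging Face transformers API and first-subword pooling (both assumptions of this sketch) to produce one vector per word for a downstream transition-based decoder:

```python
# Illustrative sketch: obtain one BERT vector per input word, the kind of
# word representation a transition-based decoder could consume unchanged.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def word_representations(words):
    """Return one BERT vector per word (first-subword pooling, an assumption here)."""
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**enc).last_hidden_state[0]    # (num_subwords, 768)
    # keep the first subword of each word; special tokens have word_id None
    keep, seen = [], set()
    for i, wid in enumerate(enc.word_ids()):
        if wid is not None and wid not in seen:
            seen.add(wid)
            keep.append(i)
    return hidden[keep]                               # (num_words, 768)

vecs = word_representations(["The", "dog", "barks"])
print(vecs.shape)  # torch.Size([3, 768])
```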
“…Is the success of neural constituency parsers (Henderson 2004; Vinyals et al 2015; Dyer et al 2016; Cross and Huang 2016; Choe and Charniak 2016; Liu and Zhang 2017; Kitaev and Klein 2018, inter alia) similarly transferable to out-of-domain treebanks? In this work, we focus on zero-shot generalization: training parsers on a single treebank (e.g.…”
Section: Introduction (mentioning)
confidence: 99%
“…To perform a fair comparison, we define the novel dynamic oracles on the original implementations of the top-down parser by Dyer et al. (2016) and the in-order parser by Liu and Zhang (2017a), where parsers are trained with a traditional static oracle. Both implementations follow a stack-LSTM approach to represent the stack and the buffer, as well as a vanilla LSTM to represent the action history.…”
Section: Neural Model (mentioning)
confidence: 99%
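The stack-LSTM mentioned in the quote keeps one LSTM state per stack element, so popping restores the representation that was valid before the corresponding push. A minimal PyTorch sketch, with class and method names chosen here for illustration:

```python
# Illustrative stack-LSTM (not the cited implementation): the parser's
# stack summary is always the LSTM state at the top of a stack of states.
import torch
import torch.nn as nn

class StackLSTM(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        empty = (torch.zeros(1, hidden_size), torch.zeros(1, hidden_size))
        self.states = [empty]              # stack of (h, c) pairs

    def push(self, x):                     # x: (1, input_size)
        h, c = self.cell(x, self.states[-1])
        self.states.append((h, c))

    def pop(self):                         # discard the top state
        self.states.pop()

    def summary(self):                     # hidden state at the stack top
        return self.states[-1][0]

s = StackLSTM(4, 8)
s.push(torch.randn(1, 4))
s.push(torch.randn(1, 4))
s.pop()                                    # back to the state after the first push
print(s.summary().shape)                   # torch.Size([1, 8])
```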
“…On the other hand, Liu and Zhang (2017a) recently developed a novel strategy that finds a compromise between the strengths of top-down and bottom-up approaches, resulting in state-of-the-art accuracy. Concretely, this parser builds the tree following an in-order traversal: instead of starting the tree from the top, it chooses the non-terminal of the resulting subtree once its first child is already on the stack.…”
Section: Introduction (mentioning)
confidence: 99%
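The in-order strategy described above can also be read off a gold tree as an oracle action sequence: shift until the first child of a constituent is complete, project the constituent's non-terminal, then process the remaining children and reduce. The sketch below is a simplified illustration (the action names paraphrase the paper's shift/project/reduce system; the nested-tuple tree encoding is an assumption of this sketch, and the final termination action is omitted):

```python
# Hedged sketch of a static in-order oracle: derive gold actions from a
# gold tree. A tree is either a word (str) or (label, [children]).

def in_order_actions(tree):
    if isinstance(tree, str):                        # terminal: put it on the stack
        return [f"SHIFT({tree})"]
    label, children = tree
    actions = in_order_actions(children[0])          # finish the first child
    actions.append(f"PROJECT({label})")              # then predict the parent
    for child in children[1:]:                       # then the remaining children
        actions += in_order_actions(child)
    actions.append("REDUCE")                         # close the constituent
    return actions

# (S (NP The dog) (VP barks))
tree = ("S", [("NP", ["The", "dog"]), ("VP", ["barks"])])
print(in_order_actions(tree))
# ['SHIFT(The)', 'PROJECT(NP)', 'SHIFT(dog)', 'REDUCE',
#  'PROJECT(S)', 'SHIFT(barks)', 'PROJECT(VP)', 'REDUCE', 'REDUCE']
```

Note how PROJECT(NP) is emitted as soon as "The" is on the stack, before "dog" is even shifted: the non-terminal is chosen after its first child, which is the compromise the quote describes.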