Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, 2015
DOI: 10.3115/v1/p15-1113
Transition-based Neural Constituent Parsing

Abstract: Constituent parsing is typically modeled by a chart-based algorithm under probabilistic context-free grammars or by a transition-based algorithm with rich features. Previous models rely heavily on richer syntactic information through lexicalizing rules, splitting categories, or memorizing long histories. However, enriched models incur numerous parameters and sparsity issues, and are insufficient for capturing various syntactic phenomena. We propose a neural network structure that explicitly models the unbounded…
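To make the transition-based framing concrete, below is a minimal sketch of a shift-reduce state machine for building constituent trees. The action inventory (shift, binary reduce, unary) and the tuple-based tree representation are simplifying assumptions for illustration, not the paper's exact transition system or its neural scoring model.

```python
# Minimal sketch of a transition-based constituent parser's state machine.
# Simplified, illustrative assumptions -- not the paper's exact system.

class State:
    def __init__(self, words):
        self.queue = list(words)   # buffer of remaining input words
        self.stack = []            # partially built subtrees

    def shift(self):
        # SHIFT: move the next word from the queue onto the stack as a leaf.
        self.stack.append(self.queue.pop(0))

    def reduce(self, label):
        # REDUCE-X: pop the top two subtrees and combine them under label X.
        right = self.stack.pop()
        left = self.stack.pop()
        self.stack.append((label, left, right))

    def unary(self, label):
        # UNARY-X: wrap the top subtree with a unary nonterminal X.
        self.stack.append((label, self.stack.pop()))

    def is_final(self):
        return not self.queue and len(self.stack) == 1


# Example: derive (S (NP the cat) (VP sleeps)) for "the cat sleeps".
s = State(["the", "cat", "sleeps"])
s.shift(); s.shift(); s.reduce("NP")
s.shift(); s.unary("VP")
s.reduce("S")
print(s.stack[0])  # ('S', ('NP', 'the', 'cat'), ('VP', 'sleeps'))
```

A neural transition-based parser replaces the hand-chosen action order above with a learned scorer that, at each state, assigns scores to all legal actions.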

Cited by 59 publications (86 citation statements). References 32 publications.
“…The effectiveness of neural features has also been studied for this framework (Watanabe and Sumita, 2015; Andor et al., 2016). We apply the transition-based neural framework to disfluency detection, which to our knowledge has not been investigated before.…”
Section: Related Work
Mentioning confidence: 99%
“…Global optimization has achieved success for several NLP tasks under the neural setting (Zhou et al., 2015; Watanabe and Sumita, 2015). For relation extraction, global learning gives the best performances under the discrete setting (Li and Ji, 2014; Miwa and Sasaki, 2014).…”
Section: Global Optimization
Mentioning confidence: 99%
“…As has been commonly understood, learning local decisions for structured prediction can lead to label bias (Lafferty et al., 2001), which prevents globally optimal structures from receiving optimal scores by the model. We address this potential issue by building a structural neural model for end-to-end relation extraction, following a recent line of efforts on globally optimized models for neural structured prediction (Zhou et al., 2015; Watanabe and Sumita, 2015; Andor et al., 2016; Wiseman and Rush, 2016).…”
Section: Introduction
Mentioning confidence: 99%
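To spell out the label-bias contrast this excerpt refers to, the standard formulation (paraphrased, not quoted from the cited papers) scores an action sequence a_{1:T} either with per-step normalization or with one normalization over whole sequences:

```latex
% Locally normalized model: each action's score is squashed into a
% per-step conditional probability. An early low-probability action can
% never be compensated for later -- the label-bias effect.
p_{\mathrm{local}}(a_{1:T} \mid x)
  = \prod_{t=1}^{T}
    \frac{\exp s(a_t \mid a_{1:t-1}, x)}
         {\sum_{a'} \exp s(a' \mid a_{1:t-1}, x)}

% Globally normalized model: a single partition function over complete
% action sequences, so scores trade off across the whole derivation.
p_{\mathrm{global}}(a_{1:T} \mid x)
  = \frac{\exp \sum_{t=1}^{T} s(a_t \mid a_{1:t-1}, x)}
         {\sum_{a'_{1:T}} \exp \sum_{t=1}^{T} s(a'_t \mid a'_{1:t-1}, x)}
```

Under the local model each step's distribution must sum to one no matter how poor every continuation is, so a globally better derivation can be locked out by one early low-probability step; the global model normalizes once over full sequences and avoids this.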
“…We (Zhou et al., 2015) use a globally normalized training objective for modeling the whole action sequence, while Weiss et al. only update the parameters of the perceptron layer by employing a perceptron-based objective. Watanabe and Sumita (2015), Xu, Auli, and Clark (2016) and Wiseman and Rush (2016) propose different structured-prediction neural models with global optimization over a sequence of actions. Watanabe and Sumita use the neural structured model and beam search on the constituent parsing task.…”
Section: Neural Structured-prediction Parsing Models
Mentioning confidence: 99%
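As an illustration of the globally optimized, beam-search style of decoding these excerpts describe, here is a generic sketch. The `score_action`, `legal_actions`, and `apply_action` callbacks are hypothetical stand-ins for a neural scorer and transition system; this is not the decoder of any specific cited paper.

```python
# Generic beam search over action sequences under a global objective:
# a derivation's score is the sum of its per-action scores, with no
# per-step normalization. Hypothetical interface; see lead-in above.
import heapq

def beam_search(init_state, score_action, legal_actions, apply_action,
                is_final, beam_size=8):
    # Each beam item is (cumulative score, state, action history).
    # apply_action must return a NEW state (states are treated as immutable).
    beam = [(0.0, init_state, [])]
    while not all(is_final(state) for _, state, _ in beam):
        candidates = []
        for total, state, history in beam:
            if is_final(state):
                candidates.append((total, state, history))  # keep finished items
                continue
            for action in legal_actions(state):
                s = score_action(state, action)             # unnormalized score
                candidates.append((total + s,
                                   apply_action(state, action),
                                   history + [action]))
        # Prune to the top-k partial derivations by cumulative score.
        beam = heapq.nlargest(beam_size, candidates, key=lambda item: item[0])
    best_score, _, best_actions = max(beam, key=lambda item: item[0])
    return best_actions, best_score
```

With beam_size=1 this degenerates to greedy local decoding; widening the beam is what lets a globally trained model recover derivations whose early actions score poorly in isolation.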