2005
DOI: 10.1007/s11063-004-3423-4
Broad-Coverage Parsing with Neural Networks

Abstract: Subsymbolic systems have been successfully used to model several aspects of human language processing. Such parsers are appealing because they allow revising the interpretation as words are incrementally processed. Yet, it has been very hard to scale them up to realistic language due to training time, limited memory, and the difficulty of representing linguistic structure. In this study, we show that it is possible to keep track of long-distance dependencies and to parse into deeper structures than before…

Cited by 3 publications (1 citation statement)

References 19 publications
“…More recently, machine learning-based DP methods have been proposed for automatically classifying configurations into transition types for dependency relation extraction. Some of these efforts have focused on developing probabilistic models (e.g., Wang and Harper 2004; Collins 2003; Samuelsson 2000; Eisner 1996), while others have proposed discriminative approaches with support vector machines (e.g., Kudo and Matsumoto 2002; Yamada and Matsumoto 2003), beam search-based perceptrons (e.g., Zhang and Nivre 2011; Zhang and Clark 2008), dynamic programming-based perceptrons (e.g., Huang and Sagae 2010), or neural networks (e.g., Mayberry and Miikkulainen 2005; Henderson 2004). In recent years, there have been an increasing number of research efforts focusing on NN-based DP methods (e.g., Dozat and Manning 2018; Strubell and McCallum 2017; Nguyen et al. 2017; Dozat and Manning 2017; Hashimoto et al. 2017; Kuncoro et al. 2017; Kiperwasser and Goldberg 2016; Cheng et al. 2016; Yazdani and Henderson 2015; Alberti et al. 2015; Weiss et al. 2015; Dyer et al. 2015; Chen and Manning 2014).…”

Section: Machine Learning-based Dependency Parsing Methods
Confidence: 99%
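The citing passage describes transition-based dependency parsing: each parser configuration (a stack and a buffer of unread words) is classified into a transition type, and the classifier may be probabilistic, an SVM, a perceptron, or a neural network. As a hedged illustration of that setting, the sketch below implements a minimal arc-standard transition system; the `choose_transition` callback stands in for the learned classifier, and here an oracle supplies a gold transition sequence. The function and variable names are illustrative, not taken from any cited system.

```python
def parse(words, choose_transition):
    """Minimal arc-standard parser sketch (illustrative, not the cited
    systems' implementation). `choose_transition` plays the role of the
    learned classifier: given the current configuration it returns one
    of "SHIFT", "LEFT-ARC", or "RIGHT-ARC". Returns head arcs as a
    dict {dependent_index: head_index}."""
    stack = []                         # indices of partially processed words
    buffer = list(range(len(words)))   # indices of words not yet read
    arcs = {}
    # Parse until the buffer is empty and a single root remains on the stack.
    while buffer or len(stack) > 1:
        action = choose_transition(stack, buffer)
        if action == "SHIFT":
            # Move the next word from the buffer onto the stack.
            stack.append(buffer.pop(0))
        elif action == "LEFT-ARC":
            # Second-from-top word becomes a dependent of the stack top.
            dep = stack.pop(-2)
            arcs[dep] = stack[-1]
        elif action == "RIGHT-ARC":
            # Stack top becomes a dependent of the word below it.
            dep = stack.pop()
            arcs[dep] = stack[-1]
        else:
            raise ValueError(f"unknown transition: {action}")
    return arcs


# Usage: parse "She eats fish" with an oracle transition sequence.
gold = iter(["SHIFT", "SHIFT", "LEFT-ARC", "SHIFT", "RIGHT-ARC"])
arcs = parse(["She", "eats", "fish"], lambda stack, buffer: next(gold))
# arcs == {0: 1, 2: 1}: "eats" (index 1) heads both "She" and "fish"
```

In the cited work the oracle is replaced by a trained model that scores transitions from features of the configuration; the transition system itself stays the same.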