Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1237
Graph-based Dependency Parsing with Graph Neural Networks

Abstract: We investigate the problem of efficiently incorporating high-order features into neural graph-based dependency parsing. Instead of explicitly extracting high-order features from intermediate parse trees, we develop a more powerful dependency tree node representation which captures high-order information concisely and efficiently. We use graph neural networks (GNNs) to learn the representations and discuss several new configurations of GNN's updating and aggregation functions. Experiments on PTB show that our p…
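As a rough illustration of the approach sketched in the abstract, the following is a minimal, hypothetical PyTorch sketch of one GNN layer that refines each word's representation by aggregating its neighbours under a soft dependency graph; the module name, dimensions, and the choice of a row-normalized score matrix as the adjacency are assumptions made here for illustration, not the paper's exact architecture.

    # Hypothetical sketch: one GNN layer over a soft dependency graph.
    # h holds the word (node) representations; adj is a soft adjacency matrix,
    # e.g. row-normalized arc scores from a previous parsing step.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GNNLayer(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.w_self = nn.Linear(dim, dim)    # transform of the node itself
            self.w_neigh = nn.Linear(dim, dim)   # transform of aggregated neighbours

        def forward(self, h, adj):
            # h: (batch, n_words, dim), adj: (batch, n_words, n_words)
            neigh = torch.bmm(adj, h)            # aggregation: weighted neighbour sum
            return F.relu(self.w_self(h) + self.w_neigh(neigh))  # update function

Stacking a few such layers lets a node's representation absorb information from grandparents and siblings, which is one way to capture the high-order context the abstract refers to.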

Cited by 57 publications (51 citation statements)
References 29 publications (37 reference statements)
“…Finally, we conduct experiments on Universal Dependencies (UD) v2.2 and v2.3 following Ji et al. (2019) and respectively. We adopt the 300d multilingual pretrained word embeddings used in Zeman et al. (2018) and take the CharLSTM representations as input.…”
Section: Methods (mentioning; confidence: 99%)
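A minimal sketch of the input construction this excerpt describes, assuming a frozen 300d pretrained word embedding concatenated with a bidirectional CharLSTM summary of each word's characters; the class name, character-embedding size, and hidden size below are illustrative choices, not values taken from the cited papers.

    # Hypothetical sketch: pretrained 300d embedding + CharLSTM representation.
    import torch
    import torch.nn as nn

    class WordEncoder(nn.Module):
        def __init__(self, pretrained, n_chars, char_dim=50, char_hidden=100):
            super().__init__()
            self.word_emb = nn.Embedding.from_pretrained(pretrained, freeze=True)  # 300d vectors
            self.char_emb = nn.Embedding(n_chars, char_dim)
            self.char_lstm = nn.LSTM(char_dim, char_hidden,
                                     bidirectional=True, batch_first=True)

        def forward(self, word_ids, char_ids):
            # word_ids: (n_words,)   char_ids: (n_words, max_chars)
            w = self.word_emb(word_ids)                      # (n_words, 300)
            _, (h_n, _) = self.char_lstm(self.char_emb(char_ids))
            c = torch.cat([h_n[0], h_n[1]], dim=-1)          # final states of both directions
            return torch.cat([w, c], dim=-1)                 # concatenated word representation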
“…Equations (4) and (5) constitute the mechanism by which each iteration of refinement can condition on the previous graph. Instead of the more common approach of hard-coding some attention heads to represent a relation (e.g., Ji et al., 2019), all attention heads can learn for themselves how to use the information about relations.…”
Section: Self-attention Mechanism (mentioning; confidence: 99%)
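A hedged sketch of the mechanism contrasted in this excerpt: rather than hard-coding particular attention heads to particular relations, every head receives a learned bias derived from the relation predicted between each word pair in the previous refinement iteration. The module name, the additive-bias formulation, and all shapes are assumptions for illustration only.

    # Hypothetical sketch: self-attention conditioned on a previously predicted graph.
    import math
    import torch
    import torch.nn as nn

    class GraphConditionedAttention(nn.Module):
        def __init__(self, dim, n_heads, n_relations):
            super().__init__()
            self.n_heads, self.d_head = n_heads, dim // n_heads
            self.qkv = nn.Linear(dim, 3 * dim)
            self.rel_bias = nn.Embedding(n_relations, n_heads)  # one learned bias per head

        def forward(self, x, rel_ids):
            # x: (batch, n, dim); rel_ids: (batch, n, n) relation ids from the last iteration
            b, n, _ = x.shape
            q, k, v = (t.view(b, n, self.n_heads, self.d_head).transpose(1, 2)
                       for t in self.qkv(x).chunk(3, dim=-1))
            logits = q @ k.transpose(-2, -1) / math.sqrt(self.d_head)     # (b, heads, n, n)
            logits = logits + self.rel_bias(rel_ids).permute(0, 3, 1, 2)  # relation-aware bias
            attn = logits.softmax(dim=-1)
            return (attn @ v).transpose(1, 2).reshape(b, n, -1)

Because the relation information enters only as a shared bias term, no single head is tied to a specific relation; each head is free to use or ignore it.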
“…Akin to graph-based parsers (Ji et al., 2019; Zhang et al., 2019), our model generates parse structures in the form of graphs. In our case, however, graph nodes correspond to syntactic primitives (atomic types & dependencies) rather than words, while the discovery of the graph structure is subject to hard constraints imposed by the decoder's output.…”
Section: Related Work (mentioning; confidence: 99%)
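For background on the graph-based parsers this excerpt compares against, below is a generic, illustrative biaffine arc-scoring step: every ordered word pair receives a score and each word picks a head. A real parser decodes a well-formed tree (e.g. with a maximum-spanning-tree algorithm) rather than the greedy argmax used here, and all names and sizes are assumptions.

    # Hypothetical sketch: biaffine arc scoring for a graph-based parser.
    import torch
    import torch.nn as nn

    class BiaffineArcScorer(nn.Module):
        def __init__(self, dim, arc_dim=200):
            super().__init__()
            self.head_mlp = nn.Linear(dim, arc_dim)   # word as a candidate head
            self.dep_mlp = nn.Linear(dim, arc_dim)    # word as a candidate dependent
            self.W = nn.Parameter(torch.randn(arc_dim, arc_dim) / arc_dim ** 0.5)

        def forward(self, h):
            # h: (n_words, dim) contextual representations, with word 0 acting as ROOT
            scores = self.dep_mlp(h) @ self.W @ self.head_mlp(h).t()  # (n_words, n_words) arc scores
            return scores.argmax(dim=-1)  # greedy head index per word (no tree constraint)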