Proceedings of the 5th Workshop on Structured Prediction for NLP (SPNLP 2021), 2021
DOI: 10.18653/v1/2021.spnlp-1.3
Learning compositional structures for semantic graph parsing

Abstract: AM dependency parsing is a method for neural semantic graph parsing that exploits the principle of compositionality. While AM dependency parsers have been shown to be fast and accurate across several graphbanks, they require explicit annotations of the compositional tree structures for training. In the past, these were obtained using complex graphbank-specific heuristics written by experts. Here we show how they can instead be trained directly on the graphs with a neural latent-variable model, drastically reducing…
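As a rough formulation of the latent-variable idea sketched in the abstract (my reading, not the paper's exact objective), training directly on graphs amounts to maximizing the marginal likelihood of the observed graph G while summing over the unobserved compositional trees t that could derive it:

log p(G) = log Σ_{t ∈ T(G)} p(G, t)

where T(G) denotes the set of AM dependency trees that evaluate to G.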

Cited by 3 publications (6 citation statements)
References 13 publications (17 reference statements)
“…We follow and rely on the dependency parsing model of Kiperwasser and Goldberg (2016), which scores each dependency edge by feeding neural representations for the two tokens to an MLP. We train the parser using the setup of Groschwitz et al. (2021), which does not require explicit annotations with AM dependency trees.…”
Section: The AM Parser
confidence: 99%
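The citation statement above describes a Kiperwasser and Goldberg (2016)-style edge scorer: each candidate dependency edge is scored by an MLP over the neural representations of its head and dependent tokens. The following is a minimal PyTorch sketch of that idea, not the authors' implementation; the class name, encoder choice, and dimensions are illustrative assumptions.

import torch
import torch.nn as nn

class EdgeScorer(nn.Module):
    def __init__(self, token_dim=256, hidden_dim=128):
        super().__init__()
        # BiLSTM encoder producing contextual token representations (assumed setup).
        self.encoder = nn.LSTM(token_dim, token_dim // 2,
                               batch_first=True, bidirectional=True)
        # MLP that maps a concatenated (head, dependent) pair to a single edge score.
        self.mlp = nn.Sequential(
            nn.Linear(2 * token_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, token_embeddings):
        # token_embeddings: (batch, seq_len, token_dim)
        reps, _ = self.encoder(token_embeddings)
        batch, n, dim = reps.shape
        # Build all head/dependent representation pairs and score them with the MLP.
        heads = reps.unsqueeze(2).expand(batch, n, n, dim)
        deps = reps.unsqueeze(1).expand(batch, n, n, dim)
        pairs = torch.cat([heads, deps], dim=-1)
        # scores[b, h, d] = score of the edge h -> d in sentence b.
        return self.mlp(pairs).squeeze(-1)

# Usage example on random embeddings for a 5-token sentence:
scores = EdgeScorer()(torch.randn(1, 5, 256))
print(scores.shape)  # torch.Size([1, 5, 5])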
“…Hyperparameters. For the AM parser, we primarily copy hyperparameter values from the AMR experiments of Groschwitz et al. (2021). This helps prevent overfitting on COGS, but we also note that hyperparameter tuning for compositional generalization datasets can be difficult anyway, since one can typically achieve perfect scores on an in-domain dev set.…”
Section: A Training Details of the AM Parser
confidence: 99%