Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2017
DOI: 10.18653/v1/p17-1014

Neural AMR: Sequence-to-Sequence Models for Parsing and Generation

Abstract: Sequence-to-sequence models have shown strong performance across a broad range of applications. However, their application to parsing and generating text using Abstract Meaning Representation (AMR) has been limited, due to the relatively limited amount of labeled data and the nonsequential nature of the AMR graphs. We present a novel training procedure that can lift this limitation using millions of unlabeled sentences and careful preprocessing of the AMR graphs. For AMR parsing, our model achieves competitive…
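The "careful preprocessing" mentioned in the abstract centers on turning each AMR graph into a flat token sequence that a sequence-to-sequence model can read and emit. As a minimal sketch only (the nested-tuple graph encoding, the traversal order, and the function name below are illustrative assumptions, not the authors' released preprocessing pipeline), a depth-first linearization could look like this:

```python
# Minimal sketch of AMR graph linearization for seq2seq training.
# The nested-tuple representation and traversal order are illustrative
# assumptions, not the paper's actual preprocessing code.

def linearize(node):
    """Depth-first linearization of an AMR node into a token sequence.

    A node is (concept, [(relation, child), ...]) where each child is
    either another node or a string constant. Variable names are dropped,
    mirroring the common practice of removing variables before training.
    """
    concept, edges = node
    tokens = ["(", concept]
    for relation, child in edges:
        tokens.append(relation)
        if isinstance(child, tuple):
            tokens.extend(linearize(child))
        else:
            tokens.append(str(child))
    tokens.append(")")
    return tokens


if __name__ == "__main__":
    # Toy AMR for "The boy wants to go":
    amr = ("want-01", [
        (":ARG0", ("boy", [])),
        (":ARG1", ("go-01", [(":ARG0", ("boy", []))])),
    ])
    print(" ".join(linearize(amr)))
    # ( want-01 :ARG0 ( boy ) :ARG1 ( go-01 :ARG0 ( boy ) ) )
```

The toy example produces a bracketed sequence with variable names dropped, which is the general shape of linearized AMR used as seq2seq input and output.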

Cited by 243 publications (420 citation statements)
References 28 publications
“…Delexicalisation seems to improve results, corroborating the findings from Konstas et al (2017). While Delexicalisation is harmful and Compression is beneficial for PBMT, we see the opposite in NMT models.…”
Section: Discussion (supporting, confidence: 85%)
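For context on the quoted finding, delexicalisation replaces surface named entities with placeholder tokens before training and restores them afterwards, shrinking the open vocabulary the model must learn. The sketch below is a simplified, hypothetical illustration (the NAME_i placeholder scheme and the single-token-entity assumption are mine, not taken from the cited papers):

```python
# Hypothetical illustration of delexicalisation: swap named entities for
# placeholders before training, then restore them after decoding.

def delexicalise(tokens, entities):
    """Replace known entity tokens (single tokens here, for simplicity)
    with numbered placeholders; return the inverse mapping for restoration."""
    mapping = {}
    out = []
    for tok in tokens:
        if tok in entities:
            placeholder = mapping.setdefault(tok, "NAME_{}".format(len(mapping)))
            out.append(placeholder)
        else:
            out.append(tok)
    return out, {v: k for k, v in mapping.items()}


def relexicalise(tokens, mapping):
    """Substitute placeholders back with their original entity strings."""
    return [mapping.get(tok, tok) for tok in tokens]


if __name__ == "__main__":
    sent = "Obama visited Paris".split()
    delex, table = delexicalise(sent, {"Obama", "Paris"})
    print(delex)                       # ['NAME_0', 'visited', 'NAME_1']
    print(relexicalise(delex, table))  # ['Obama', 'visited', 'Paris']
```

Typically the same placeholders are applied on both the graph and the text side of each training pair so that they line up during generation.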
“…Besides the differences between these two MT architectures, applying preordering in the Linearisation step improves results in both cases. This seems to contradict the finding in Konstas et al (2017) regarding neural models. We conjecture that the additional training data used by Konstas et al (2017) may have decreased the gap between using and not using preordering (see also below).…”
Section: Discussion (contrasting, confidence: 74%)