Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.167

GPT-too: A Language-Model-First Approach for AMR-to-Text Generation

Abstract: Abstract Meaning Representations (AMRs) are broad-coverage sentence-level semantic graphs. Existing approaches to generating text from AMR have focused on training sequence-to-sequence or graph-to-sequence models on AMR-annotated data only. In this paper, we propose an alternative approach that combines a strong pre-trained language model with cycle consistency-based re-scoring. Despite the simplicity of the approach, our experimental results show these models outperform all previous techniques on the English LDC2017T10…
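To place the approach described in the abstract, here is a minimal, hypothetical sketch of cycle consistency-based re-scoring: candidate sentences produced by the fine-tuned language model are parsed back into AMR, and each candidate is ranked by how well the reconstructed graph matches the input graph. The helper names (`parse`, `similarity`) and the interface below are assumptions for illustration only, not the authors' implementation.

```python
# Hypothetical sketch of cycle-consistency re-scoring (not the authors' code).
# Assumes: `parse(text)` maps a sentence back to an AMR string, and
# `similarity(a, b)` compares two AMRs (e.g. a Smatch-style score).
from typing import Callable, Iterable, Tuple


def rescore_by_cycle_consistency(
    source_amr: str,
    candidates: Iterable[str],
    parse: Callable[[str], str],
    similarity: Callable[[str, str], float],
) -> Tuple[str, float]:
    """Return the candidate whose re-parsed AMR best matches the source AMR."""
    best_text, best_score = "", float("-inf")
    for text in candidates:
        reconstructed = parse(text)              # text -> AMR (cycle back)
        score = similarity(source_amr, reconstructed)
        if score > best_score:
            best_text, best_score = text, score
    return best_text, best_score
```

In this reading, the pre-trained language model supplies fluent candidates and the re-scoring step selects the one that best preserves the input meaning.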

Cited by 71 publications (94 citation statements)
References 25 publications
“…The Transformer+copy model obtains better performance than the Graph2seq+copy model, as the Transformer architecture is in effect a graph neural network with self-attention as the aggregation function over the neighbors, treating the input as a fully-connected graph. Recent works (Lin et al., 2019; Rogers et al., 2020; Mager et al., 2020) have shown that Transformer-based structures can capture hierarchical syntactic structures and graph representations. The GPT-2 model obtains the best performance among all, with a significantly larger improvement.…”
Section: Fully-supervised Setting (mentioning)
confidence: 99%
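The claim above, that self-attention amounts to aggregation over a fully-connected graph, can be made concrete with a small illustrative sketch (not code from GPT-2 or any cited system): every token attends to every other token, so each node's "neighborhood" is the entire input.

```python
# Illustration only: single-head self-attention viewed as message passing
# over a fully-connected graph (every token is a neighbor of every token).
import numpy as np


def self_attention(x: np.ndarray, wq: np.ndarray, wk: np.ndarray, wv: np.ndarray) -> np.ndarray:
    """x: (n_tokens, d_model); wq/wk/wv: (d_model, d_head)."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])          # (n, n): edge weights of the complete graph
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over all "neighbors"
    return weights @ v                               # each node aggregates messages from all others
```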
“…Ribeiro et al. (2019) conducted a Mechanical Turk evaluation to compare their best graph encoder model with a sequence-to-sequence baseline, finding that their model performs better on both meaning similarity between the generated sentence and the gold reference and readability of the generated sentence. Mager et al. (2020) carry out a human evaluation of overall quality, comparing their GPT-2-based system to three others (Guo et al., 2019; Ribeiro et al., 2019; Zhu et al., 2019), all of which are also evaluated in our experiment. For the three systems included in both our evaluation and theirs, the relative results are comparable; Mager et al. (2020) find their own system to be better than all three.…”
Section: Evaluation of AMR Generation (mentioning)
confidence: 99%
“…These approaches show comparable or better results than early neural models (Konstas et al., 2017). However, recent neural approaches (Song et al., 2018; Zhu et al., 2019; Cai and Lam, 2020; Wang et al., 2020; Mager et al., 2020) have demonstrated state-of-the-art performance thanks to the use of contextualized embeddings.…”
Section: Related Work (mentioning)
confidence: 99%