Findings of the Association for Computational Linguistics: EMNLP 2020
DOI: 10.18653/v1/2020.findings-emnlp.288

Pushing the Limits of AMR Parsing with Self-Learning

Abstract: Abstract Meaning Representation (AMR) parsing has experienced notable growth in performance over the last two years, owing both to the impact of transfer learning and to the development of novel architectures specific to AMR. At the same time, self-learning techniques have helped push the performance boundaries of other natural language processing applications, such as machine translation and question answering. In this paper, we explore different ways in which trained models can be applied to improve AMR parsing performance…
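The self-learning referred to in the abstract is, at its core, the standard self-training recipe: train a parser on gold AMR annotations, use it to annotate unlabeled sentences (silver data), filter the resulting parses, and retrain on the combined data. The sketch below shows only this generic loop under assumed function names (train_fn, parse_fn, keep_fn); the paper explores several variants, and none of these names come from its implementation.

```python
from typing import Callable, List, Tuple

Sentence = str
Graph = object  # stand-in for whatever AMR graph representation the parser produces

def self_train(
    gold: List[Tuple[Sentence, Graph]],
    unlabeled: List[Sentence],
    train_fn: Callable[[List[Tuple[Sentence, Graph]]], object],
    parse_fn: Callable[[object, Sentence], Graph],
    keep_fn: Callable[[Graph], bool],
    rounds: int = 1,
) -> object:
    """Generic self-training loop: train on gold, self-annotate, filter, retrain."""
    model = train_fn(gold)                      # initial parser trained on gold AMR
    for _ in range(rounds):
        silver = []
        for sentence in unlabeled:
            graph = parse_fn(model, sentence)   # self-annotate with the current model
            if keep_fn(graph):                  # drop unusable parses before reuse
                silver.append((sentence, graph))
        model = train_fn(gold + silver)         # retrain on gold + silver data
    return model
```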

Cited by 21 publications (23 citation statements)
References 19 publications
“…Our small model trails the base model by only a small margin, and we achieve high performance on the small AMR 1.0 dataset, indicating that our approach benefits from a good inductive bias towards the problem, so that learning is efficient. More remarkably, we even surpass the scores reported in Lee et al. (2020), which combines various self-learning techniques and utilizes 85K extra sentences for self-annotation (silver data). For the most recent AMR 3.0 dataset, we report our results for future reference.…”
Section: Results (contrasting)
confidence: 59%
“…We thus use this ensemble to annotate the 85K sentence set used in Lee et al. (2020). After removing parses with detached nodes, we obtained 70K model-annotated silver-data sentences.…”
Section: Results (mentioning)
confidence: 99%
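The filtering mentioned in this excerpt ("removing parses with detached nodes") amounts to a connectivity check on each silver graph: a parse is kept only if every node is reachable from the root. A minimal sketch is below, assuming each parse is available as a root plus (source, relation, target) triples; this representation and the helper name are illustrative, not the cited work's code.

```python
from collections import defaultdict

def has_detached_nodes(root, triples):
    """Return True if any node is unreachable from the root (edges treated as undirected)."""
    neighbors = defaultdict(set)
    nodes = {root}
    for src, _rel, tgt in triples:
        nodes.update((src, tgt))
        neighbors[src].add(tgt)
        neighbors[tgt].add(src)
    seen, stack = {root}, [root]
    while stack:
        for nxt in neighbors[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen != nodes

# Toy usage: keep only silver parses whose graphs are fully connected.
silver = [
    ("r", [("r", ":ARG0", "a"), ("r", ":ARG1", "b")]),   # connected, kept
    ("r", [("r", ":ARG0", "a"), ("x", ":ARG1", "y")]),   # nodes x and y detached, dropped
]
kept = [p for p in silver if not has_detached_nodes(*p)]
print(len(kept))  # 1
```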
“…Xu et al. (2020a) improved sequence-to-sequence parsing for AMR by using pre-trained encoders, reaching performance similar to that of Cai and Lam (2020). Astudillo et al. (2020) introduced a stack-transformer to enhance transition-based AMR parsing (Ballesteros and Al-Onaizan, 2017), and Lee et al. (2020) improved it further, using a trained parser to mine oracle actions and combining it with AMR-to-text generation to outperform the state of the art. Wang et al. (2018) parsed Chinese AMR with a transition-based system.…”
Section: Overview Of Approaches (mentioning)
confidence: 93%