Proceedings of the Shared Task on Cross-Framework Meaning Representation Parsing at the 2019 Conference on Natural Language Learning (CoNLL 2019)
DOI: 10.18653/v1/k19-2016

Peking at MRP 2019: Factorization- and Composition-Based Parsing for Elementary Dependency Structures

Abstract: We design, implement and evaluate two semantic parsers, which represent factorization- and composition-based approaches respectively, for Elementary Dependency Structures (EDS) at the CoNLL 2019 Shared Task on Cross-Framework Meaning Representation Parsing. The detailed evaluation of the two parsers gives us a new perception about parsing into linguistically enriched meaning representations: current neural EDS parsers are able to reach an accuracy at the inter-annotator agreement level in the same-epoch-and-domain…
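To make the contrast between the two approaches concrete, the sketch below illustrates the core of a factorization-based (graph-based) parser: scoring every candidate head-dependent pair over contextualized token encodings with a biaffine function, from which a graph is then decoded. This is a minimal illustration under assumptions, not the authors' actual system; the names BiaffineEdgeScorer, arc_dim, and the PyTorch setup are all hypothetical choices for exposition.

```python
# Minimal sketch of a biaffine edge scorer, the typical core of a
# factorization-based graph parser (illustrative, not the Peking system).
import torch
import torch.nn as nn


class BiaffineEdgeScorer(nn.Module):
    def __init__(self, hidden_dim: int, arc_dim: int = 256):
        super().__init__()
        # Separate projections for tokens acting as heads vs. dependents.
        self.head_mlp = nn.Sequential(nn.Linear(hidden_dim, arc_dim), nn.ReLU())
        self.dep_mlp = nn.Sequential(nn.Linear(hidden_dim, arc_dim), nn.ReLU())
        # Biaffine weight; the extra +1 dimensions act as bias terms.
        self.weight = nn.Parameter(torch.empty(arc_dim + 1, arc_dim + 1))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, states: torch.Tensor) -> torch.Tensor:
        # states: (batch, seq_len, hidden_dim) contextualized token encodings,
        # e.g. from a pre-trained language model.
        heads = self.head_mlp(states)  # (batch, seq_len, arc_dim)
        deps = self.dep_mlp(states)    # (batch, seq_len, arc_dim)
        ones = states.new_ones(states.size(0), states.size(1), 1)
        heads = torch.cat([heads, ones], dim=-1)
        deps = torch.cat([deps, ones], dim=-1)
        # scores[b, i, j]: score of an edge with head token i and dependent token j.
        return torch.einsum("bix,xy,bjy->bij", heads, self.weight, deps)
```

In a full EDS parser, edge scoring of this kind would be paired with node (predicate) prediction and a decoding step that assembles the graph; a composition-based parser instead derives the graph incrementally through graph-building operations over a derivation, which this sketch does not cover.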

Cited by 7 publications (10 citation statements). References: 31 publications (26 reference statements).
“…For EDS, the strongest results were obtained in the MRP 2019 official competition by SUDA-Alibaba (Zhang et al., 2019c). However, in the post-evaluation stage, they were outperformed by the Peking system (Chen et al., 2019). Both used factorization-based parsing with pre-trained contextualized language model embeddings (which has consistently proved to be very effective for other frameworks too).…”
Section: Overview of Approaches
Citation type: mentioning (confidence: 99%)
“…The table compares ERG parsing results to a selection of 'real' submissions to the shared task, viz. the top performers within each framework and for the task overall: HIT-SCIR (Che et al., 2019), Peking (Chen et al., 2019), SJTU-NICT (Bai and Zhao, 2019), and SUDA-Alibaba (Zhang et al., 2019). In contrast to the ERG parser, all of these systems are purely data-driven, in the sense that they do not incorporate manually curated linguistic knowledge (beyond finite-state tokenization rules, maybe) but rather learn all their parameters exclusively from the shared task training data.…”
Section: Results
Citation type: mentioning (confidence: 99%)
“…The tutorial will describe the guidelines and rationale behind UCCA, helping potential application designers understand what abstractions it makes. Significant effort has been devoted to building UCCA parsers (Hershcovich et al., 2017; Hershcovich et al., 2018; Jiang et al., 2019; Lyu et al., 2019; Tuan Nguyen and Tran, 2019; Taslimipoor et al., 2019; Marzinotto et al., 2019; Pütz and Glocker, 2019; Yu and Sagae, 2019; Zhang et al., 2019a; Hershcovich and Arviv, 2019; Donatelli et al., 2019; Che et al., 2019; Bai and Zhao, 2019; Lai et al., 2019; Koreeda et al., 2019; Straka and Straková, 2019; Cao et al., 2019; Zhang et al., 2019b; Droganova et al., 2019; Chen et al., 2019; Arviv et al., 2020; Samuel and Straka, 2020; Dou et al., 2020), including a SemEval 2019 shared task on cross-lingual UCCA parsing (Hershcovich et al., 2019b), which had 8 participating teams, as well as the CoNLL 2019 and CoNLL 2020 shared tasks on cross-framework and cross-lingual meaning representation parsing (Oepen et al., 2019; Oepen et al., 2020), where 12 and 4 teams, respectively, submitted parsed UCCA graphs. This tutorial will allow researchers interested in UCCA parsing, and more generally graph parsing, to deepen their understanding of the framework and what properties make it unique.…”
Section: Relevance
Citation type: mentioning (confidence: 99%)