Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1392

Broad-Coverage Semantic Parsing as Transduction

Abstract: We unify different broad-coverage semantic parsing tasks under a transduction paradigm, and propose an attention-based neural framework that incrementally builds a meaning representation via a sequence of semantic relations. By leveraging multiple attention mechanisms, the transducer can be effectively trained without relying on a pre-trained aligner. Experiments conducted on three separate broad-coverage semantic parsing tasks (AMR, SDP and UCCA) demonstrate that our attention-based neural transducer improves the state of the art on both AMR and UCCA, and is competitive with the state of the art on SDP.
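To make the transduction view concrete, here is a minimal sketch of how a meaning representation can be built incrementally as a sequence of semantic relations, as the abstract describes: each step introduces a node and attaches it to a previously generated node. This is not the authors' implementation; the action sequence and labels below are hand-written for illustration.

    from dataclasses import dataclass, field
    from typing import List, Optional, Tuple

    @dataclass
    class Graph:
        nodes: List[str] = field(default_factory=list)                    # node labels
        edges: List[Tuple[int, str, int]] = field(default_factory=list)   # (head, relation, dependent)

        def step(self, label: str, relation: Optional[str] = None,
                 head: Optional[int] = None) -> int:
            """One transducer step: add a node and, unless it is the root,
            attach it to an already-generated node via a semantic relation."""
            self.nodes.append(label)
            dep = len(self.nodes) - 1
            if head is not None and relation is not None:
                self.edges.append((head, relation, dep))
            return dep

    # Hand-written action sequence for "The boy wants to go" (AMR-style);
    # a trained transducer would predict each step with attention over the
    # input tokens and the partially built graph.
    g = Graph()
    want = g.step("want-01")                # root node
    boy  = g.step("boy",   ":ARG0", want)   # the boy is the wanter
    go   = g.step("go-02", ":ARG1", want)   # going is what is wanted
    g.step("boy", ":ARG0", go)              # the goer; a coreference step
                                            # would merge it with the first "boy"

    print(g.nodes)  # ['want-01', 'boy', 'go-02', 'boy']
    print(g.edges)  # [(0, ':ARG0', 1), (0, ':ARG1', 2), (2, ':ARG0', 3)]

In the actual model the next step is predicted by an attention-based decoder over the encoded sentence and the graph built so far; the toy sequence here only shows the kind of structure such a transducer emits.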

Cited by 59 publications (56 citation statements)
References 54 publications
“…The authors argue that simultaneous learning of alignment and parses benefits parsing, in the sense that alignment is directly informed by the parsing objective, thus producing overall better alignments. Zhang et al. (2019a) and Zhang et al. (2019b) recently reported results that outperform all previously reported SMATCH scores, on both AMR 2.0 and AMR 1.0. The proposed attention-based model is aligner-free and treats AMR parsing as a sequence-to-graph task.…”
Section: Related Work
confidence: 74%
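The "aligner-free" property highlighted in this statement can be pictured with a small sketch: the decoder's source-side attention distribution doubles as a soft alignment from each predicted node to the input tokens, so no pre-trained aligner is needed. The weights below are made up for illustration; a trained model would produce them.

    import numpy as np

    tokens = ["The", "boy", "wants", "to", "go"]
    nodes  = ["want-01", "boy", "go-02"]

    # attention[i, j]: decoder attention of node i over token j (rows sum to 1)
    attention = np.array([
        [0.02, 0.03, 0.90, 0.03, 0.02],   # want-01 attends to "wants"
        [0.10, 0.85, 0.02, 0.02, 0.01],   # boy     attends to "boy"
        [0.02, 0.02, 0.06, 0.10, 0.80],   # go-02   attends to "go"
    ])

    # A hard alignment can be read off with an argmax, but the parsing loss
    # never needs gold alignments, so alignment is learned jointly with parsing.
    for i, node in enumerate(nodes):
        j = int(attention[i].argmax())
        print(f"{node} -> {tokens[j]}")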
“…The tutorial will describe the guidelines and rationale behind UCCA, helping potential application designers understand what abstractions it makes. Significant effort has been devoted to building UCCA parsers (Hershcovich et al., 2017; Hershcovich et al., 2018; Jiang et al., 2019; Lyu et al., 2019; Tuan Nguyen and Tran, 2019; Taslimipoor et al., 2019; Marzinotto et al., 2019; Pütz and Glocker, 2019; Yu and Sagae, 2019; Zhang et al., 2019a; Hershcovich and Arviv, 2019; Donatelli et al., 2019; Che et al., 2019; Bai and Zhao, 2019; Lai et al., 2019; Koreeda et al., 2019; Straka and Straková, 2019; Cao et al., 2019; Zhang et al., 2019b; Droganova et al., 2019; Chen et al., 2019; Arviv et al., 2020; Samuel and Straka, 2020; Dou et al., 2020), including a SemEval 2019 shared task on cross-lingual UCCA parsing (Hershcovich et al., 2019b), which had 8 participating teams, as well as the CoNLL 2019 and CoNLL 2020 shared tasks on cross-framework and cross-lingual meaning representation parsing (Oepen et al., 2019; Oepen et al., 2020), where 12 and 4 teams, respectively, submitted parsed UCCA graphs. This tutorial will allow researchers interested in UCCA parsing, and more generally graph parsing, to deepen their understanding of the framework and of the properties that make it unique.…”
Section: Relevance
confidence: 99%
“…Parsing (25m). TUPA (Hershcovich et al., 2017; Hershcovich et al., 2018; Hershcovich and Arviv, 2019; Arviv et al., 2020), SemEval 2019 Task 1 (Hershcovich et al., 2019b; Jiang et al., 2019), the CoNLL 2019 and CoNLL 2020 shared tasks (Oepen et al., 2019; Oepen et al., 2020), and more recent parsers (Zhang et al., 2019a).…”
Section: Relation to Other Representations (15m)
confidence: 99%
“…Over the past few years, the accuracy of neural semantic parsers which parse English sentences into graph-based semantic representations has increased substantially (Dozat and Manning, 2018; Zhang et al., 2019; He and Choi, 2020; Cai and Lam, 2020). Most of these parsers use a neural model which can freely predict node labels and edges, and most of them are tailored to a specific type of graphbank.…”
Section: Introduction
confidence: 99%