Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d16-1065

AMR Parsing with an Incremental Joint Model

Abstract: To alleviate the error propagation in the traditional pipelined models for Abstract Meaning Representation (AMR) parsing, we formulate AMR parsing as a joint task that performs the two subtasks, concept identification and relation identification, simultaneously. To this end, we first develop a novel componentwise beam search algorithm for relation identification in an incremental fashion, and then incorporate the decoder into a unified framework based on multiple-beam search, which allows for the bi-directional…
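The abstract describes the decoder only at a high level. As a rough illustration (not the authors' implementation), the Python sketch below shows one incremental joint decoding step per word: propose scored concept candidates, then score relations linking the new concept to earlier ones, and prune the combined hypotheses to a fixed beam size. The Hypothesis class, concept_candidates, relation_candidates, the ARG0 label, and all scores are placeholder assumptions; the paper's actual framework additionally maintains multiple beams to support the bi-directional flow of information referred to in the truncated sentence above.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Hypothesis:
    concepts: List[str] = field(default_factory=list)                    # concept chosen for each word so far
    relations: List[Tuple[int, int, str]] = field(default_factory=list)  # (head word, dependent word, label)
    score: float = 0.0


def concept_candidates(word: str) -> List[Tuple[str, float]]:
    # Hypothetical concept scorer: map a word to scored concept candidates.
    return [(word.lower(), 0.0), ("NULL", -1.0)]


def relation_candidates(hyp: Hypothesis, i: int) -> List[Tuple[List[Tuple[int, int, str]], float]]:
    # Hypothetical relation scorer: score attaching word i to each earlier concept.
    options: List[Tuple[List[Tuple[int, int, str]], float]] = [([], 0.0)]  # option: add no new edge
    for j, concept in enumerate(hyp.concepts[:i]):
        if concept != "NULL":
            options.append(([(j, i, "ARG0")], -0.5))                       # placeholder edge and score
    return options


def decode(words: List[str], beam_size: int = 4) -> Hypothesis:
    beam = [Hypothesis()]
    for i, word in enumerate(words):
        expanded = []
        for hyp in beam:
            # concept identification step for word i
            for concept, c_score in concept_candidates(word):
                partial = Hypothesis(hyp.concepts + [concept],
                                     list(hyp.relations),
                                     hyp.score + c_score)
                # componentwise relation identification step for word i
                for edges, r_score in relation_candidates(partial, i):
                    expanded.append(Hypothesis(partial.concepts,
                                               partial.relations + edges,
                                               partial.score + r_score))
        # prune to the beam size before moving on to the next word
        beam = sorted(expanded, key=lambda h: h.score, reverse=True)[:beam_size]
    return beam[0]


if __name__ == "__main__":
    best = decode("the boy wants to go".split())
    print(best.concepts, best.relations, best.score)
```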

Cited by 41 publications (42 citation statements)
References 17 publications
“…AMR parsing thus requires solving several natural language processing tasks: named entity recognition, word sense disambiguation, and joint syntactic and semantic role labeling. AMR parsing has attracted a lot of attention in recent years (Wang et al., 2015a; Zhou et al., 2016; Wang et al., 2015b; Goodman et al., 2016; Guo and Lu, 2018; Lyu and Titov, 2018; Vilares and Gómez-Rodríguez, 2018; Zhang et al., 2019).…”
Section: Introduction
confidence: 99%
“…Transition-based techniques are a natural starting point for UCCA parsing, given the conceptual similarity of UCCA's distinctions, centered around predicate-argument structures, to distinctions expressed by dependency schemes, and the achievements of transition-based methods in dependency parsing (Dyer et al., 2015; Andor et al., 2016; Kiperwasser and Goldberg, 2016). We are further motivated by the strength of transition-based methods in related tasks, including dependency graph parsing (Sagae and Tsujii, 2008; Ribeyre et al., 2014; Tokgöz and Eryigit, 2015), constituency parsing (Sagae and Lavie, 2005; Zhang and Clark, 2009; Zhu et al., 2013; Maier, 2015; Maier and Lichte, 2016), AMR parsing (Wang et al., 2015a,b, 2016; Misra and Artzi, 2016; Goodman et al., 2016; Zhou et al., 2016; Damonte et al., 2017) and CCG parsing (Zhang and Clark, 2011; Ambati et al., 2015, 2016).…”
Section: Introduction
confidence: 99%
“…Pust et al. (2015) formulate AMR parsing as a machine translation problem in which the sentence is the source language input and the AMR is the target language output. AMR parsing systems that focus on modeling the graph aspect of the AMR include JAMR (Flanigan et al., 2014, 2016a; Zhou et al., 2016), which treats AMR parsing as a procedure for searching for the Maximum Spanning Connected Subgraphs (MSCGs) from an edge-labeled, directed graph of all possible relations. Parsers based on Hyperedge Replacement Grammars (HRG) (Chiang et al., 2013; Björklund et al., 2016; Groschwitz et al., 2015) put more emphasis on modeling the formal properties of the AMR graph.…”
Section: Related Work
confidence: 99%
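As a rough illustration of the MSCG search mentioned in the excerpt above (not JAMR's actual implementation), the sketch below greedily connects concept nodes by repeatedly adding the highest-scoring relation edge that joins two disconnected components, until the chosen edges span all nodes; the full JAMR procedure involves further steps and AMR-specific constraints omitted here. Node names, relation labels, and scores are toy placeholders.

```python
import heapq
from typing import Dict, List, Tuple

Edge = Tuple[float, str, str, str]  # (score, head concept, dependent concept, relation label)


def connect_mscg(nodes: List[str], scored_edges: List[Edge]) -> List[Tuple[str, str, str]]:
    # Union-find over concept nodes to track connected components.
    parent: Dict[str, str] = {n: n for n in nodes}

    def find(x: str) -> str:
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    # Max-heap via negated scores: always consider the best remaining edge first.
    heap = [(-score, head, dep, label) for score, head, dep, label in scored_edges]
    heapq.heapify(heap)

    chosen: List[Tuple[str, str, str]] = []
    components = len(nodes)
    while heap and components > 1:
        _, head, dep, label = heapq.heappop(heap)
        if find(head) != find(dep):        # edge joins two components, so keep it
            parent[find(head)] = find(dep)
            chosen.append((head, dep, label))
            components -= 1
    return chosen


if __name__ == "__main__":
    nodes = ["want-01", "go-01", "boy"]
    edges = [(0.9, "want-01", "boy", "ARG0"),
             (0.8, "want-01", "go-01", "ARG1"),
             (0.7, "go-01", "boy", "ARG0")]
    print(connect_mscg(nodes, edges))
```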
“…We include the previous best results on this dataset. The parser proposed by Zhou et al. (2016) jointly learns concepts and relations through an incremental joint model. We also include the AMR parser of Pust et al. (2015), which models AMR parsing as a machine translation task and incorporates various external resources.…”
Section: Comparison With Other Parsers
confidence: 99%