Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, 2015
DOI: 10.3115/v1/p15-1095
Robust Subgraph Generation Improves Abstract Meaning Representation Parsing

Abstract: The Abstract Meaning Representation (AMR) is a representation for open-domain rich semantics, with potential use in fields like event extraction and machine translation. Node generation, typically done using a simple dictionary lookup, is currently an important limiting factor in AMR parsing. We propose a small set of actions that derive AMR subgraphs by transformations on spans of text, which allows for more robust learning of this stage. Our set of construction actions generalize better than the previous approach…

Cited by 40 publications (47 citation statements); references 21 publications.
“…While sharing much of this work's motivation, not anchoring the representation in the text complicates the parsing task, as it requires the alignment to be automatically (and imprecisely) detected. Indeed, despite considerable technical effort (Pourdamghani et al., 2014; Werling et al., 2015), concept identification is only about 80%-90% accurate. Furthermore, anchoring allows breaking down sentences into semantically meaningful sub-spans, which is useful for many applications (Fernández-González and Martins, 2015; Birch et al., 2016).…”
Section: Related Work
confidence: 99%
“…(Wang et al., 2015; Vanderwende et al., 2015; Peng et al., 2015; Pust et al., 2015; Artzi et al., 2015; Flanigan et al., 2014; Werling et al., 2015). In contrast, we follow the spirit of minimal feature extraction using pre-trained word embeddings, as in Collobert et al. (2011), and a recurrent network architecture similar to that described in Zhou and Xu (2015).…”
Section: Related Work
confidence: 99%
“…When separating the alignments into roles (edge labels) and non-roles (concepts), F1 scores are 49.3% and 89.8%, respectively. The AMR parser of Werling et al. (2015) casts the alignment task as a linear programming relaxation of a boolean problem, with an objective that maximizes the sum of action reliability.…”
Section: Amr-english Sentence Alignermentioning
confidence: 99%
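The citation above describes the aligner's core idea: a boolean program that assigns at most one subgraph-derivation action to each token span so that the summed "action reliability" is maximized, solved via its linear programming relaxation. A minimal sketch of that objective follows; the action names and scores are illustrative, not taken from the paper. With only per-span constraints the LP relaxation of this boolean program is integral, so a per-span argmax already recovers the optimum; a fuller model with mutual-exclusion constraints between overlapping spans would need a real LP/ILP solver.

```python
# Sketch (under stated assumptions) of a boolean span-to-action assignment
# maximizing total action reliability, as in an LP-relaxed alignment objective.

ACTIONS = ["DICT", "NAME", "LEMMA"]  # hypothetical action inventory


def align(reliability):
    """reliability[i][j]: score for applying action j to span i.

    Because each span is constrained independently (at most one action per
    span), the relaxed optimum is attained at an integral point: pick the
    highest-scoring action per span, or no action if none scores positively.
    """
    assignment = []
    for span_scores in reliability:
        best = max(range(len(span_scores)), key=lambda j: span_scores[j])
        # An action contributes to the objective only if its reliability > 0.
        assignment.append(ACTIONS[best] if span_scores[best] > 0 else None)
    return assignment


scores = [
    [0.9, 0.1, 0.2],  # span 0: strongly prefers a dictionary lookup
    [0.2, 0.8, 0.1],  # span 1: looks like a named entity
]
print(align(scores))  # -> ['DICT', 'NAME']
```

The design choice to decompose per span is what makes this toy version trivially solvable; the interesting (NP-hard) structure in a real aligner comes from constraints coupling overlapping spans.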