We present a semantic parser for Abstract Meaning Representations which learns to parse strings into tree representations of the compositional structure of an AMR graph. This allows us to use standard neural techniques for supertagging and dependency tree parsing, constrained by a linguistically principled type system. We present two approximative decoding algorithms, which achieve state-of-the-art accuracy and outperform strong baselines.
We describe the Saarland University submission to the shared task on Cross-Framework Meaning Representation Parsing (MRP) at the 2019 Conference on Computational Natural Language Learning (CoNLL).
It has been posited that Spellout, like other syntactic operations, can occur more than once (Bresnan 1971). I suggest that what we call Spellout is in fact at least two separate operations: at minimum, one which determines the linear order of Lexical Items ("linearise") and another which sends the phonological features of the Spelled-out constituent to the phonological component of the language faculty, rendering that constituent unavailable to the syntactic derivation ("atomise"). The separate application of these operations can yield phenomena such as Holmberg's Generalisation and other successive cyclicity effects (Fox & Pesetsky 2005), which linearise without atomising, and scrambling, which atomises without linearising.

2 Background

2.1 Linear Correspondence Axiom (LCA)

X-bar theory offers no mechanical algorithm to map hierarchical structure to the surface linear form of language: any pair of sisters can be stipulated to appear in either order. In his 1994 monograph The Antisymmetry of Syntax, Richard Kayne proposed that linear order is in fact derivable from hierarchical structure, and showed that it is possible to derive X-bar assumptions from c-command relations. In particular, he proposed the Linear Correspondence Axiom (LCA):

Linear Correspondence Axiom (Kayne 1994): For any pair of nonterminal nodes <X, Y>, if X asymmetrically c-commands Y, then each terminal node dominated by X precedes each terminal node dominated by Y. Moreover, the set of all such correspondences constitutes a total ordering on the terminal nodes.

Kayne assumes irreflexive dominance and that terminal nodes (e.g. lexical items) project up to a syntactic head without branching. At least one of these two assumptions is necessary to derive a total ordering on the terminal nodes. Nunes and Uriagereka (2000) propose that the Minimalist assumption of Bare Phrase Structure is correct (i.e. terminal nodes do not project up to a syntactic head without branching) and that the LCA is simpler than Kayne's statement. In particular, they remove the notion of dominance from its definition.

Linear Correspondence Axiom (Nunes & Uriagereka 2000): A Lexical Item α precedes a Lexical Item β iff α asymmetrically c-commands β.

The removal of dominance from the definition of the LCA means that Nunes & Uriagereka are concerned with nothing but terminal nodes. This contrasts with Kayne, who uses mathematical relations among nonterminals to determine the linear order of terminals; Nunes & Uriagereka determine the linear order of terminals directly from the relations among the terminals themselves. We will see how this works in more detail in section 2.3 below.

2.2 Fox & Pesetsky

Fox & Pesetsky (2003, 2005) propose that Spellout fixes the relative order of the lexical items in a Spelled-out domain. At the end of the construction of each Spellout Domain (SD) D_i,
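To make the LCA concrete, here is a minimal runnable sketch of Kayne's version on a toy tree. The tree, node labels, and helper names are illustrative assumptions rather than anything from the paper, and c-command is implemented with the common "sister dominates" definition.

```python
# A minimal sketch of how Kayne's (1994) LCA derives linear order from
# hierarchy. Tree, labels, and helpers are illustrative assumptions.

from itertools import product

class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

    def terminals(self):
        # Terminal (leaf) nodes this node dominates, in tree order.
        if not self.children:
            return [self]
        return [t for c in self.children for t in c.terminals()]

def all_nodes(root):
    out, stack = [], [root]
    while stack:
        n = stack.pop()
        out.append(n)
        stack.extend(n.children)
    return out

def dominates(x, y):
    # Reflexive dominance: y lies somewhere in x's subtree.
    return x is y or any(dominates(c, y) for c in x.children)

def parents(root):
    return {id(c): n for n in all_nodes(root) for c in n.children}

def c_commands(x, y, pm):
    # x c-commands y iff some sister of x dominates y.
    p = pm.get(id(x))
    if p is None:  # the root has no sister
        return False
    return any(dominates(s, y) for s in p.children if s is not x)

def lca_precedence(root):
    # Kayne (1994): for nonterminals X asymmetrically c-commanding Y,
    # every terminal under X precedes every terminal under Y.
    pm = parents(root)
    nonterminals = [n for n in all_nodes(root) if n.children]
    pairs = set()
    for x, y in product(nonterminals, repeat=2):
        if x is not y and c_commands(x, y, pm) and not c_commands(y, x, pm):
            pairs.update((a.label, b.label)
                         for a, b in product(x.terminals(), y.terminals()))
    return pairs

# Toy VP "eat the cake", with each terminal projecting a nonbranching
# head, per Kayne's assumption:
#   VP -> V("eat") DP;  DP -> D("the") NP;  NP -> N("cake")
VP = Node("VP", [Node("V", [Node("eat")]),
                 Node("DP", [Node("D", [Node("the")]),
                             Node("NP", [Node("N", [Node("cake")])])])])
print(sorted(lca_precedence(VP)))
# -> [('eat', 'cake'), ('eat', 'the'), ('the', 'cake')]: a total order.
```

Under Nunes & Uriagereka's reformulation, the analogous check would run directly over the lexical items of a Bare Phrase Structure tree, with no unary projections: α is ordered before β whenever α asymmetrically c-commands β.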
AM dependency parsing is a method for neural semantic graph parsing that exploits the principle of compositionality. While AM dependency parsers have been shown to be fast and accurate across several graphbanks, they require explicit annotations of the compositional tree structures for training. In the past, these were obtained using complex graphbank-specific heuristics written by experts. Here we show how they can instead be trained directly on the graphs with a neural latent-variable model, drastically reducing the amount and complexity of manual heuristics. We demonstrate that our model picks up on several linguistic phenomena on its own and achieves comparable accuracy to supervised training, greatly facilitating the use of AM dependency parsing for new sembanks.