Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2016
DOI: 10.18653/v1/p16-1001
Noise reduction and targeted exploration in imitation learning for Abstract Meaning Representation parsing

Abstract: Semantic parsers map natural language statements into meaning representations, and must abstract over syntactic phenomena, resolve anaphora, and identify word senses to eliminate ambiguous interpretations. Abstract Meaning Representation (AMR) is a recent example of one such semantic formalism which, similar to a dependency parse, utilizes a graph to represent relationships between concepts (Banarescu et al., 2013). As with dependency parsing, transition-based approaches are a common approach to this problem.
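The graph structure the abstract refers to can be illustrated with the standard example from Banarescu et al. (2013), written in PENMAN notation: for "The boy wants to go", the variable b is reused because the boy is both the wanter and the goer, which is what makes an AMR a graph rather than a tree.

```
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-01
            :ARG0 b))
```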

Cited by 47 publications (40 citation statements). References 23 publications (43 reference statements).
“…Another line of work trains machine translation models to convert strings into linearized AMRs (Barzdins and Gosko, 2016; Peng et al., 2017b; Konstas et al., 2017; Buys and Blunsom, 2017b). Transition-based AMR parsers either use dependency trees as a pre-processing step, mapping them into AMRs (Wang et al., 2015a,b, 2016; Goodman et al., 2016), or use a transition system tailored to AMR parsing (Damonte et al., 2017; Ballesteros and Al-Onaizan, 2017). We differ from the above approaches in addressing AMR parsing using the same general DAG parser used for other schemes.…”
Section: Tackled Parsing Tasks
confidence: 99%
“…Transition-based techniques are a natural starting point for UCCA parsing, given the conceptual similarity of UCCA's distinctions, centered around predicate-argument structures, to distinctions expressed by dependency schemes, and the achievements of transition-based methods in dependency parsing (Dyer et al., 2015; Andor et al., 2016; Kiperwasser and Goldberg, 2016). We are further motivated by the strength of transition-based methods in related tasks, including dependency graph parsing (Sagae and Tsujii, 2008; Ribeyre et al., 2014; Tokgöz and Eryigit, 2015), constituency parsing (Sagae and Lavie, 2005; Zhang and Clark, 2009; Zhu et al., 2013; Maier, 2015; Maier and Lichte, 2016), AMR parsing (Wang et al., 2015a,b, 2016; Misra and Artzi, 2016; Goodman et al., 2016; Zhou et al., 2016; Damonte et al., 2017) and CCG parsing (Zhang and Clark, 2011; Ambati et al., 2015, 2016).…”
Section: Introduction
confidence: 99%
“…The baselines were summarized as follows. (1) On the RACE data set, six baselines were employed, including three introduced in the release of the data set, that is, Sliding Window (Richardson et al., 2013), Stanford AR, and GA (Dhingra et al., 2016); another three methods proposed recently, namely DFN (Xu et al., 2017), BiAttention 250d MRU (Tay et al., 2018), and OFT (Radford et al., 2018). (2) For the MCTest data set, nine baselines were investigated, involving four on lexical matching, i.e.…”
Section: Comparisons With the State-of-the-arts
confidence: 99%