Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016) 2016
DOI: 10.18653/v1/s16-1185

CU-NLP at SemEval-2016 Task 8: AMR Parsing using LSTM-based Recurrent Neural Networks

Abstract: We describe the system used in our participation in the AMR Parsing task for SemEval-2016. Our parser does not rely on a syntactic pre-parse, or heavily engineered features, and uses five recurrent neural networks as the key architectural components for estimating AMR graph structure.
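The title and abstract describe LSTM-based recurrent networks as the parser's core components; a citing paper below notes these are bidirectional LSTMs (Hochreiter and Schmidhuber, 1997). As a rough illustration of that kind of component, here is a minimal per-token BiLSTM tagger in PyTorch; the dimensions and names are invented for the sketch, and this is not the authors' implementation:

```python
# Minimal sketch of a BiLSTM sequence tagger of the kind the abstract
# describes as an architectural component (invented sizes; illustrative
# only, not the authors' code).
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=128, num_tags=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        # Forward and backward hidden states are concatenated: 2 * hidden_dim.
        self.out = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids):              # token_ids: (batch, seq_len)
        states, _ = self.lstm(self.embed(token_ids))
        return self.out(states)                # (batch, seq_len, num_tags)

scores = BiLSTMTagger(vocab_size=10_000)(torch.randint(0, 10_000, (1, 12)))
print(scores.shape)                            # torch.Size([1, 12, 32])
```

Because the tagger reads the sentence in both directions, each token's scores condition on full left and right context, which is part of what lets such parsers skip a syntactic pre-parse.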

Cited by 11 publications (12 citation statements) | References 12 publications

“…6.2.7 CU-NLP (Foland and Martin, 2016) This parser does not rely on a syntactic pre-parse, or heavily engineered features, and uses five recurrent neural networks as the key architectural components for estimating AMR graph structure.…”
Section: UofR
confidence: 99%
“…More recently, Foland and Martin (2016) describe a neural network based model that decomposes the AMR parsing task into a series of subproblems. Their system first identifies the concepts using a Bidirectional LSTM Recurrent Neural Network (Hochreiter and Schmidhuber, 1997), and then locates and labels the arguments and attributes for each predicate, and finally constructs the AMR using the concepts and relations identified in previous steps.…”
Section: Related Work
confidence: 99%
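The staged decomposition this excerpt describes (concept identification, then argument and attribute labeling, then graph construction) can be made concrete with a toy sketch. Everything below, including the lexicon and the argument-linking rule, is an invented stand-in for the paper's learned networks:

```python
# Toy, self-contained illustration of the three-stage decomposition
# described above. The lexicon and linking rule are invented stand-ins
# for the learned networks, not the authors' method.

CONCEPT_LEXICON = {"boy": "boy", "wants": "want-01", "sleep": "sleep-01"}

def identify_concepts(tokens):
    """Stage 1 stand-in: map tokens to AMR concepts (a BiLSTM in the paper)."""
    return [CONCEPT_LEXICON[t] for t in tokens if t in CONCEPT_LEXICON]

def label_arguments(concepts):
    """Stage 2 stand-in: attach each predicate sense (e.g. want-01) to the
    nearest preceding non-predicate concept as its :ARG0."""
    relations = []
    for i, c in enumerate(concepts):
        if "-" in c:                      # predicate senses look like want-01
            args = [d for d in concepts[:i] if "-" not in d]
            if args:
                relations.append((c, ":ARG0", args[-1]))
    return relations

def build_graph(concepts, relations):
    """Stage 3: assemble concepts and labeled relations into an adjacency map."""
    graph = {c: [] for c in concepts}
    for head, label, dep in relations:
        graph[head].append((label, dep))
    return graph

concepts = identify_concepts("the boy wants to sleep".split())
relations = label_arguments(concepts)
print(build_graph(concepts, relations))
# {'boy': [], 'want-01': [(':ARG0', 'boy')], 'sleep-01': [(':ARG0', 'boy')]}
```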
“…While it is trivial to categorize the PREDICATE, NON-PREDICATE, CONST cases, there is no straightforward way to deal with the MULTICONCEPT type. Foland and Martin (2016) only handle named entities, which constitute the […]. Based on the observation that many of the MULTICONCEPT cases are actually similarly structured subgraphs that only differ in the lexical items, we choose to factor the lexical items out of the subgraph fragments and use the skeletal structure as the fine-grained labels, which we refer to as Factored Concept Labels (FCL). Figure 4 shows that although the English words "visitor" and "worker" have been aligned to different subgraph fragments, after replacing the lexical items, in this case the leaf concepts visit-01 and work-01, with a placeholder "x", we are able to arrive at the same FCL.…”
Section: Factored Concept Labels
confidence: 99%
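The factoring this excerpt describes can be shown in a few lines. The tuple encoding of a subgraph fragment below is an assumption made for illustration; the point is only that fragments differing in a single lexical leaf collapse to one skeletal label:

```python
# Sketch of the Factored Concept Label (FCL) idea described above: replace
# the lexical leaf concept of a subgraph fragment with the placeholder "x"
# so fragments that differ only lexically share one fine-grained label.
# The (head, relation, leaf) tuple encoding is assumed for illustration.

def to_fcl(fragment):
    head, relation, leaf = fragment
    return (head, relation, "x"), leaf   # skeletal label + factored-out lexeme

# "visitor" -> (person :ARG0-of visit-01); "worker" -> (person :ARG0-of work-01)
visitor = ("person", ":ARG0-of", "visit-01")
worker = ("person", ":ARG0-of", "work-01")

assert to_fcl(visitor)[0] == to_fcl(worker)[0]  # both yield (person :ARG0-of x)
```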
“…In this trend, the AMR parsing task has been held for two consecutive years, at SemEval-2016 and SemEval-2017. Many parsers have shown outstanding performance with high F-scores, such as RIGA (Barzdins and Gosko, 2016), CAMR, and CU-NLP (Foland and Martin, 2016).…”
Section: Introduction
confidence: 99%