Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016) 2016
DOI: 10.18653/v1/s16-1176

RIGA at SemEval-2016 Task 8: Impact of Smatch Extensions and Character-Level Neural Translation on AMR Parsing Accuracy

Abstract: Two extensions to the AMR smatch scoring script are presented. The first extension combines the smatch scoring script with the C6.0 rule-based classifier to produce a human-readable report on the frequency of error patterns observed in the scored AMR graphs. This first extension results in a 4% gain over the state-of-the-art CAMR baseline parser by adding to it a manually crafted wrapper that fixes the identified CAMR parser errors. The second extension combines a per-sentence smatch with an ensemble method for selecting t…
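The truncated sentence above refers to an ensemble method driven by per-sentence smatch scores. The following minimal Python sketch illustrates one way such consensus-style selection could look; the smatch_f1 helper and the pairwise-agreement criterion are illustrative assumptions, not the paper's exact procedure.

from typing import List

def smatch_f1(amr_a: str, amr_b: str) -> float:
    # Hypothetical placeholder: return the Smatch F1 between two AMR graphs,
    # e.g. by wrapping the reference smatch scoring script.
    raise NotImplementedError("wrap the smatch scorer here")

def select_consensus_parse(candidates: List[str]) -> str:
    # Pick the candidate AMR whose total per-sentence Smatch agreement
    # with the other candidate parses of the same sentence is highest.
    best, best_score = candidates[0], float("-inf")
    for i, cand in enumerate(candidates):
        score = sum(smatch_f1(cand, other)
                    for j, other in enumerate(candidates) if j != i)
        if score > best_score:
            best, best_score = cand, score
    return best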

Cited by 51 publications (105 citation statements); references 8 publications (7 reference statements).
“…The Oxford system appears to be quite different from last year's neural submission (Foland and Martin, 2016) but nevertheless is a strong competitor. Finally, the top-scoring system, that of UIT-DANGNT-CLNLP, got a 0.61 Smatch, while last year's top scoring systems (Barzdins and Gosko, 2016; Wang et al., 2016) scored a 0.62, practically the same score. This, despite the fact that the evaluation corpora were quite different.…”
Section: Discussion (mentioning)
confidence: 94%
“…Robust machine learning techniques are necessary instead to map the arbitrary input sentences to their meaning representation in terms of PropBank and FrameNet [7], or the emerging Abstract Meaning Representation, AMR [8], which is based on PropBank with named entity recognition and linking via DBpedia [9]. AMR parsing has reached 67% accuracy (the F1 score) on open-domain texts, which is a level acceptable for automatic summarization [10].…”
Citation type: mentioning
confidence: 99%
“…Consequently most AMR parsers are pipelines that make extensive use of additional resources. Neural encoder-decoders have previously been proposed for AMR parsing, but reported accuracies are well below the state-of-the-art (Barzdins and Gosko, 2016), even with sophisticated pre-processing and categorization (Peng et al., 2017). The end-to-end neural approach contrasts with approaches based on a pipeline of multiple LSTMs (Foland Jr and Martin, 2016) or neural network classifiers inside a feature- and resource-rich parser (Damonte et al., 2017), which have performed competitively.…”
Section: Introduction (mentioning)
confidence: 99%