Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017) 2017
DOI: 10.18653/v1/s17-2159
RIGOTRIO at SemEval-2017 Task 9: Combining Machine Learning and Grammar Engineering for AMR Parsing and Generation

Abstract: By addressing both text-to-AMR parsing and AMR-to-text generation, SemEval-2017 Task 9 established AMR as a powerful semantic interlingua. We strengthen the interlingual aspect of AMR by applying the multilingual Grammatical Framework (GF) for AMR-to-text generation. Our current rule-based GF approach completely covered only 12.3% of the test AMRs, so we combined it with the state-of-the-art JAMR Generator to see whether the combination increases or decreases the overall performance. The combined system achieved…
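The combination strategy the abstract describes, rule-based generation where the grammar fully covers the input AMR and a statistical generator (JAMR in the paper) as fallback, can be sketched as toy Python. All names, the sample AMR, and the coverage check below are illustrative stand-ins, not the authors' code:

```python
# Toy sketch of a rule-based generator with statistical fallback.
# RULE_COVERED stands in for a real GF grammar coverage check;
# the AMR is written in PENMAN-style notation for readability.
RULE_COVERED = {
    "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))": "The boy wants to go.",
}

def rule_based_generate(amr: str):
    """Return a sentence if the rule set fully covers the AMR, else None."""
    return RULE_COVERED.get(amr)

def statistical_generate(amr: str) -> str:
    """Stand-in for a learned generator that always produces some output."""
    return "<statistical output for: %s>" % amr

def combined_generate(amr: str) -> str:
    """Prefer the rule-based output; fall back to the statistical one."""
    out = rule_based_generate(amr)
    return out if out is not None else statistical_generate(amr)
```

In the paper's setting the rule-based path applied to only 12.3% of test AMRs, so the fallback handles the large majority of inputs; the sketch only shows the control flow of that combination.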

Cited by 12 publications (11 citation statements). References 8 publications.
“…The participants declined to submit a new system description paper. 4.1.5 RIGOTRIO (Gruzitis et al., 2017) This team extended their CAMR-based AMR parser from last year's shared task (Barzdins and Gosko, 2016) with a gazetteer for recognizing, as named entities, the biomedical compounds frequently mentioned in biomedical texts. The gazetteer was populated from the provided biomedical AMR training data.…”
Section: CMU
confidence: 99%
“…Details on the training cycle can be found in the Supplemental Material §A (the loss is described in §4). We use the same single (hierarchical) model for all three evaluation studies, proving its applicability across different scenarios (a non-hierarchical model is only instantiated for the ablation experiments in Section §5.4).…”
Footnotes: DynamicPower (Butler, 2016), TMF (Bjerva et al., 2016), UCL+Sheffield (Goodman et al., 2016) and CU-NLP (Foland and Martin, 2016); 7 TMF-1 and TMF-2 (van Noord and Bos, 2017a), DAN-GNT (Nguyen and Nguyen, 2017), Oxford (Buys and Blunsom, 2017), RIGOTRIO (Gruzitis et al., 2017) and JAMR (Flanigan et al., 2016); 8 https://spacy.io/
Section: Methods
confidence: 99%
“…The raw data for the synsets comes from existing resources such as Princeton WordNet for English (Fellbaum 1998), Svenska OrdNät (Viberg et al. 2002) and WordNet-SALDO. Following a similar task in text-to-AMR parsing (May 2016), a recent shared task at SemEval 2017 unveiled the state of the art in AMR-to-text generation (May and Priyadarshi 2017). According to the SemEval 2017 Task 9 evaluation, the convincingly best-performing AMR-to-text generation system among the contestants (Gruzitis, Gosko, and Barzdins 2017) combines a GF-based generator with the JAMR generator (Flanigan et al. 2016), achieving a TrueSkill score (human evaluation) of 1.03-1.07 and a BLEU score (automatic evaluation) of 18.82 (May and Priyadarshi 2017). The JAMR generator alone placed second, achieving a TrueSkill score of 0.82-0.85 and a BLEU score of 19.01 (May and Priyadarshi 2017).…”
Section: Introduction
confidence: 99%
“…By adding more and more AMR-to-AST conversion rules and increasing the proportion of full GF-generated linearizations, the BLEU score would decrease even more. Figure 20 illustrates the difference between the GF-based AMR-to-text generator (Gruzitis, Gosko, and Barzdins 2017) and the JAMR generator (Flanigan et al. 2016), given a sample input AMR that the current AMR-to-AST conversion rule set fully covers. Note that the reference is not the original sentence from which the AMR was specified by a human annotator; it is an informed human translation by the authors.…”
Section: Introduction
confidence: 99%