Proceedings of the 19th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology 2022
DOI: 10.18653/v1/2022.sigmorphon-1.24

HeiMorph at SIGMORPHON 2022 Shared Task on Morphological Acquisition Trajectories

Abstract: This paper presents the submission by the HeiMorph team to SIGMORPHON 2022 Task 2 on Morphological Acquisition Trajectories. Across all experimental conditions, we have found no evidence for the so-called U-shaped development trajectory. Our submitted systems achieve average test accuracies of 55.5% on Arabic, 67% on German, and 73.38% on English. We found that bigram hallucination provides better inferences only for English and Arabic, and only when the number of hallucinations remains low.

Cited by 5 publications (4 citation statements) | References 15 publications
“…For instance, the SIGMORPHON 2021 shared Task 0 Part 2 was to predict the judgement ratings of wug words (Calderone et al., 2021) as opposed to using real words held out from the training data as test data. Similarly, the SIGMORPHON 2022 challenge involved computational modeling of data drawn from corpora of child-directed speech and evaluation on children's learning trajectories and erroneous productions (Kodner and Khalifa, 2022; Kakolu Ramarao et al., 2022).…”
Section: Related Work
confidence: 99%
“…As anticipated, participants' models achieved the lowest scores on the task of Arabic noun plural inflection. Kakolu Ramarao et al. (2022) employed a vanilla transformer architecture that takes as input the individual characters of the source lemma, the morpho-syntactic tag of the input, and the morpho-syntactic tag of the output. To account for the problem of data sparsity, they use a data hallucination technique based on alignment and replacement steps similar to that of Anastasopoulos and Neubig (2019b).…”
Section: Related Work
confidence: 99%
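As a rough illustration of the alignment-and-replacement idea described in the statement above, here is a minimal Python sketch: it aligns a lemma with its inflected form, treats the longest shared substring as the stem, and swaps that stem for random characters so the affixal pattern survives. The use of difflib for alignment, the helper name hallucinate, and the example words are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of alignment-and-replacement data hallucination
# (in the spirit of Anastasopoulos and Neubig, 2019).
# NOTE: illustrative assumption, not the authors' actual code.
import difflib
import random

def hallucinate(lemma: str, form: str, alphabet: str, rng: random.Random):
    """Return a synthetic (lemma, form) pair that keeps the affixal pattern."""
    # Alignment step: the longest shared substring stands in for the stem.
    m = difflib.SequenceMatcher(None, lemma, form).find_longest_match(
        0, len(lemma), 0, len(form))
    # Replacement step: swap the stem for random characters, applied
    # consistently to both the lemma and the inflected form.
    fake_stem = "".join(rng.choice(alphabet) for _ in range(m.size))
    new_lemma = lemma[:m.a] + fake_stem + lemma[m.a + m.size:]
    new_form = form[:m.b] + fake_stem + form[m.b + m.size:]
    return new_lemma, new_form

rng = random.Random(0)
print(hallucinate("walk", "walked", "bcdfghklmnprstvwz", rng))
# A nonce stem plus "-ed": the inflectional pattern survives replacement.
```

Repeating such a pass over each real training pair with fresh random draws yields arbitrarily many synthetic pairs, which is how hallucination counters data sparsity.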
“…They implement true mini-batch training for a substantial speed-up, rendering the system more practical on larger training sets. HeiMorph (Ramarao et al., 2022): The team from Heinrich-Heine-Universität Düsseldorf developed a system with a self-attention Transformer architecture with bigram hallucination. Submitted models were trained on enriched data sets that include either 1,000 or 10,000 bigram-aware hallucinated word pairs, generated separately for each training set size.…”
Section: Systems
confidence: 99%
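The "bigram-aware" part of the hallucination presumably concerns how replacement characters are drawn. A minimal sketch under that assumption: instead of sampling replacement characters uniformly, sample each one conditioned on its predecessor using bigram counts estimated from the training lemmas, so hallucinated stems stay phonotactically plausible. This is an illustrative reconstruction, not the HeiMorph code.

```python
# Minimal sketch of a bigram-aware sampler for hallucinated stems.
# NOTE: illustrative reconstruction, not the HeiMorph implementation.
import random
from collections import Counter, defaultdict

def bigram_counts(words):
    """Count character bigrams over training lemmas; '^' marks word start."""
    counts = defaultdict(Counter)
    for w in words:
        for prev, nxt in zip("^" + w, w):
            counts[prev][nxt] += 1
    return counts

def sample_stem(counts, length, rng):
    """Sample a nonce stem character by character from the bigram model."""
    prev, out = "^", []
    for _ in range(length):
        if not counts[prev]:          # dead end: restart from word start
            prev = "^"
        chars, weights = zip(*counts[prev].items())
        prev = rng.choices(chars, weights=weights)[0]
        out.append(prev)
    return "".join(out)

rng = random.Random(1)
counts = bigram_counts(["walk", "talk", "wash", "watch", "want"])
print(sample_stem(counts, 4, rng))  # a phonotactically plausible nonce stem
```

A stem sampled this way would then replace the aligned stem in a real pair, as in the earlier sketch; keeping the number of such pairs modest is consistent with the paper's finding that hallucination helped only when the number of hallucinations remained low.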