Proceedings of the Seventh Named Entities Workshop 2018
DOI: 10.18653/v1/w18-2412
Comparison of Assorted Models for Transliteration

Abstract: We report the results of our experiments in the context of the NEWS 2018 Shared Task on Transliteration. We focus on the comparison of several diverse systems, including three neural MT models. A combination of discriminative, generative, and neural models obtains the best results on the development sets. We also put forward ideas for improving the shared task.

Cited by 8 publications (7 citation statements)
References 6 publications
“…Table 2 shows that DTLM outperforms the other systems by a large margin thanks to its ability to leverage a target word list. Additional results are reported by Najafi et al (2018b).…”
Section: System
confidence: 63%
http://www.speech.cs.cmu.edu/SLM/toolkit.html
“…DTLM was also successfully used in the NEWS 2018 shared task on transliteration (Najafi et al, 2018b).…”
confidence: 99%
“…In the CoNLL-SIGMORPHON 2018 Shared Task on Universal Morphological Reinflection (Cotterell et al, 2018), DTLM was our best performing individual system. It was also successfully used in the NEWS 2018 shared task on transliteration (Najafi et al, 2018b).…”
Section: DTLM
confidence: 99%