2020
DOI: 10.1109/access.2020.3004879
Ancient Korean Neural Machine Translation

Abstract: Translation of the languages of ancient times can serve as a source of content for various digital media and can be helpful in fields such as natural phenomena, medicine, and science. Owing to these needs, there has been a global movement to translate ancient languages, but expert minds are required for this purpose. It is difficult to train language experts, and, more importantly, manual translation is a slow process. Consequently, the recovery of ancient characters using machine translation has be…

Cited by 24 publications (15 citation statements)
References 17 publications
“…Therefore, we direct our efforts toward three particular eras: Pre-Qin (先秦), Han (汉), and Song (宋) to verify the hypothesis that the chronology of a text directly influences the word meaning and model performance. In particular, Pre-Qin and Han are closer chronologically, so we expect their model performances to be closer than that between Pre-Qin and Song, as was shown in other ancient text translation (Park et al, 2020). One reason for this difference is the use of polysemous single-character words, which are highly ambiguous.…”
Section: Introduction
confidence: 68%
“…This study employed 21,000 subword vocabularies for the Levantine vernacular (LEV)–MSA translation task, 21,000 for the Maghrebi vernacular (MAG)–MSA task, 21,000 for the Nile Basin Arabic (NB)–MSA task, 21,000 for the Gulf Arabic (Gulf)–MSA task, and 9,235 subword vocabularies for the Iraqi Arabic (IRQ)–MSA translation task. A vocabulary of 29,500 subwords was employed for the corpus applied by Baniata et al [24] on the Levantine Arabic (LEV)–MSA and Maghrebi vernacular (MAG)–MSA translation tasks. The ReLU dropout and attention dropout values are both 0.1.…”
Section: Results
confidence: 99%
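The subword vocabularies described in the excerpt above are typically learned with byte-pair encoding (BPE), which iteratively merges the most frequent adjacent symbol pair. The cited studies do not specify their exact tokenizer, so the following is only a minimal illustrative sketch of BPE merge learning in plain Python; the toy word frequencies are invented for demonstration:

```python
from collections import Counter

def byte_pair_merges(word_freqs, num_merges):
    """Learn BPE merge rules from a {word: frequency} dict.
    Each word starts as a sequence of characters; each step merges
    the most frequent adjacent symbol pair across the corpus."""
    vocab = {tuple(word): freq for word, freq in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        # Count all adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        merged = best[0] + best[1]
        # Rewrite every word, replacing occurrences of the best pair.
        new_vocab = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(merged)
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges, vocab

# Toy corpus (invented frequencies): "es" then "est" are merged first.
merges, vocab = byte_pair_merges(
    {"low": 5, "lower": 2, "newest": 6, "widest": 3}, num_merges=4
)
```

Production systems apply thousands of such merges (e.g. the 21,000- and 29,500-entry vocabularies above) and store the merge list so that unseen words can still be segmented into known subwords.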
“…These experimental findings showed that the use of UTagger and RDRsegmenter in the Korean–Vietnamese NMT system can increase its performance, obtaining exceptional results from Korean to Vietnamese with a BLEU score of 27.79 and a TER score of 58.77, and in the reverse direction a BLEU score of 25.44 and a TER score of 58.72. Park et al [24] proposed the first ancient Korean NMT system, based on a Transformer. The method improves translator productivity by instantly generating a draft translation for ancient documents that remain untranslated.…”
Section: Related Work
confidence: 99%
“…It is a translation method based on a parallel corpus of source and target languages [5]. To address the locality problem of statistical machine translation, neural machine translation adopts whole-sentence-to-sentence translation, which captures dependencies over as long a distance as possible. And whereas statistical machine translation involves a complicated pipeline of numerous functional components, a neural machine translation system has a unified structure: a single end-to-end neural network completes the entire translation process, with no need for the cooperation and coordination among separate components that statistical machine translation requires [6,7]. Because neural networks automatically capture useful features in the data, neural machine translation also avoids complicated hand-engineered features.…”
Section: Related Work
confidence: 99%
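The long-distance dependency capture described in the excerpt above comes from attention: every target position directly weights every source position, regardless of distance. Below is a minimal sketch of scaled dot-product attention in plain Python; the toy vectors are invented for illustration, and real Transformer systems use learned multi-head projections rather than raw embeddings:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: for each query, score every key,
    normalize with softmax, and return the weighted sum of values.
    Every position attends to every other, so dependencies are
    captured regardless of how far apart the tokens are."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [
            sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
            for k in keys
        ]
        weights = softmax(scores)
        outputs.append([
            sum(w * v[j] for w, v in zip(weights, values))
            for j in range(len(values[0]))
        ])
    return outputs

# A query aligned with the first key attends almost entirely to it.
out = attention(
    queries=[[10.0, 0.0]],
    keys=[[10.0, 0.0], [0.0, 10.0]],
    values=[[1.0, 0.0], [0.0, 1.0]],
)
```

In a full Transformer this computation is repeated across multiple heads and layers, which is what lets sentence-level NMT avoid the locality problem of phrase-based statistical systems.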