The working environment of translators has changed significantly in recent decades, with post-editing (PE) emerging as a new trend in the human translation workflow, particularly following the advent of neural machine translation (NMT) and improvements in the quality of raw machine translation (MT) output, especially in fluency. In addition, the directionality axiom is increasingly being questioned, with translators working both from and into their first language in the context of translation (Buchweitz and Alves 2006; Pavlović and Jensen 2009; Fonseca and Barbosa 2015; Hunziker Heeb 2015; Ferreira 2013, 2014; Ferreira et al. 2016; Feng 2017) and in the context of PE (Garcia 2011; Sánchez-Gijón and Torres-Hostench 2014; da Silva et al. 2017; Toledo Báez 2018). In this study we employ product- and process-oriented approaches to investigate directionality in PE in the English-Greek language pair. In particular, we compare the cognitive, temporal, and technical effort expended by translators on the full PE of NMT output in L1 (Greek) with the effort required for the full PE of NMT output in L2 (English), and we also analyze the quality of the final translation product. Our findings reveal that PE in L2, i.e., inverse PE, is less demanding than PE in L1, i.e., direct PE, in terms of the time and keystrokes required and the cognitive load exerted on translators. Finally, our research shows that directionality does not imply differences in quality.
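The temporal and technical effort measures compared in this study can be operationalized as simple per-word ratios over logged PE session data. The sketch below illustrates one such operationalization; the `PESession` record and its field names are hypothetical, standing in for the kind of data that keylogging tools used in PE research typically record.

```python
from dataclasses import dataclass


@dataclass
class PESession:
    """Hypothetical log of one post-editing session (illustrative fields only)."""
    source_words: int   # length of the source segment in words
    seconds: float      # total editing time
    keystrokes: int     # insertions + deletions logged by the editor


def temporal_effort(s: PESession) -> float:
    """Temporal effort: seconds spent per source word."""
    return s.seconds / s.source_words


def technical_effort(s: PESession) -> float:
    """Technical effort: keystrokes per source word."""
    return s.keystrokes / s.source_words


# Example: a 100-word segment post-edited in 300 s with 250 keystrokes
session = PESession(source_words=100, seconds=300.0, keystrokes=250)
print(temporal_effort(session))   # → 3.0 seconds per word
print(technical_effort(session))  # → 2.5 keystrokes per word
```

Cognitive effort, by contrast, is usually inferred indirectly (e.g., from pause behavior or eye-tracking measures) and does not reduce to a single ratio like the two above.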
Due to the widespread development of Machine Translation (MT) systems, especially Neural Machine Translation (NMT) systems, MT evaluation, both automatic and human, has become increasingly important, as it helps us establish how MT systems perform. Yet automatic evaluation metrics have lagged behind, as the most popular choices (e.g., BLEU, METEOR, and ROUGE) may correlate poorly with human judgments. This paper tests an evaluation model based on a novel deep learning schema (NoDeeLe) used to compare two NMT systems on four different text genres, i.e., medical, legal, marketing, and literary, in the English-Greek language pair. The model utilizes information from the source segments, the MT outputs, and the reference translation, as well as the automatic metrics BLEU, METEOR, and WER. The proposed schema achieves a strong correlation with human judgment (78% average accuracy across the four texts, with the highest accuracy, 85%, observed for the marketing text), and it outperforms classic machine learning algorithms and automatic metrics.
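Of the automatic metrics the schema draws on, WER (word error rate) is the simplest to state: the word-level Levenshtein distance between hypothesis and reference, normalized by reference length. The implementation below is a generic textbook sketch, not the specific tooling used in the paper.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words (standard Levenshtein DP)
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + cost, # substitution (free if words match)
            )
    return d[len(r)][len(h)] / len(r)


# One substitution against a three-word reference → WER of 1/3
print(wer("the cat sat", "the dog sat"))  # → 0.3333...
```

Lower WER indicates an MT output closer to the reference; like BLEU and METEOR, it is a surface-level signal, which is why the schema combines such metrics with learned features from the source, outputs, and reference rather than relying on any one of them alone.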