Despite the rapid development of machine translation, output quality remains less than acceptable in certain language pairs. The aim of this paper is to determine which types of errors in machine translation output cause comprehension problems for readers. The study is based on a reading-task experiment using eye tracking, with a retrospective survey as a complementary method, since eye tracking on its own is considered problematic and challenging (O’Brien, 2009; Alves et al., 2009). A cognitive evaluation approach is applied in the eye tracking experiment to rank the errors in the English–Lithuanian language pair from easiest to hardest as perceived by readers of a machine-translated text. The tested parameters – gaze time and fixation count – demonstrate that different types of errors in machine-translated texts require different amounts of cognitive effort to process. The current work aims to contribute to research in the field of Translation Studies by providing an analysis of error assessment of machine translation output.
For several decades, there has been a heated debate about the value of providing corrective feedback on writing assignments in English as a foreign language (EFL) classes. Although corrective feedback on writing has been analysed from various angles, learners’ expectations regarding feedback given by language instructors remain underexplored, especially across different learning settings. Student attitudes have been found to be associated with motivation, proficiency, learner anxiety, autonomous learning, etc. (Elwood & Bode, 2014). Thus, the aim of this paper was to compare EFL learners’ attitudes towards corrective feedback, and their self-evaluation of writing skills, in different learning settings. Students at two technological universities, in France and Lithuania, were surveyed with an anonymous questionnaire combining Likert-scale and rank-order questions. The results indicate that the frequency of writing assignments seems to have little or no impact on students’ self-evaluation of writing skills. Moreover, although the two groups of students preferred feedback on different error types (e.g., feedback on structure vs. feedback on grammar), indirect corrective feedback with a clue was favoured by all respondents.
Machine translation (MT) remains a major challenge for both IT developers and users. Since the beginning of machine translation, systems have faced problems at the syntactic and semantic levels. Today, despite progress in the development of MT, systems still fail to recognise which synonym, collocation or word meaning should be used. Although mobile translation apps are very popular among users, errors in their output create misunderstandings. This paper analyses machine translation of general everyday language in the Lithuanian–English and English–Lithuanian language pairs. The results of the analysis show that more than two thirds of all the sentences were translated incorrectly, which means there is a relatively small chance that a mobile app will translate a sentence correctly. The results are disappointing: even after almost 70 years of MT research and improvement, researchers still cannot offer a system able to translate with at least 50% correctness.