Recent years have witnessed a significant increase in interest in international assessments of student performance. In such assessments it is mandatory that all the different-language texts be equivalent to each other, that is, equally difficult to understand. The article summarizes a study on this topic, examining the problems of equivalence encountered when translating texts in international reading literacy assessments. In the study, the English and Finnish versions of three texts used in the PISA 2000 reading test were compared text-analytically. The analysis revealed six types of problem, which, moreover, differed considerably between the three texts. As a result of these problems, none of the three text pairs was fully equivalent in difficulty. The study suggests that it will probably never be possible to attain full equivalence of difficulty in international reading literacy studies; however, by improving the translation process, a relatively high level of equivalence (and validity) seems attainable.
The article reviews research and findings on problems and issues faced when translating international academic achievement tests. The purpose is to draw attention to these problems, to help develop the procedures followed when translating the tests, and to provide suggestions for further research. The problems center on the following: the unique and demanding purpose of the translation task, the partly contradictory task specifications and translation instructions, indecision over whether to produce one or two target versions, indecision over whether to use one or two source versions, inadequate revision and verification, deficient translator competences, and a lack of time. To solve these problems, the article suggests the following: ensuring that the translation guidelines give an accurate, unequivocal, and balanced picture of the purpose of the translation task; ensuring the equivalence of the two source versions; putting more emphasis on revision and ensuring that verification is sufficiently thorough; using only qualified translators, providing them with training in test translation, and including subject-matter and testing specialists in the translation teams; and allotting sufficient time to the translation work. However, the main lesson from the review is that more research in the field is badly needed.
In international achievement studies, a common test is typically used which is translated into the languages of the participating countries. For the test to be valid, all the translations and different-language test versions need to be equally difficult to read and answer. An underestimated and underdiscussed threat to this validity is unwanted literal translation. This paper discusses the problem of unwanted literal translation in international achievement studies. It defines what is meant by unwanted literal translation and explains why it threatens the validity of international achievement studies and why it is so difficult to avoid. It also discusses problems that have arisen when translating these tests and that may have promoted unwanted literal translation, and it provides suggestions on how to improve translation practices so that the translations read in as natural and idiomatic a language as possible.
In international education studies, the different-language test versions need to be equally difficult to read and answer for the test to be valid. To ensure comparability, several quality control procedures have been developed. Among these, surprisingly little attention has been paid to judgmental reviews and their ability to identify language-related sources of bias; moreover, such reviews have often failed to identify biases. This paper explored whether it is possible to improve the ability of judgmental reviews to identify language-related sources of bias. A new review was made of two Finnish items that showed differential item functioning in the PISA (Programme for International Student Assessment) 2000 reading test but for which the 2000 review found no clear language-related explanations. The items were compared systematically, at all linguistic levels, with the corresponding items in the English and French source versions, while also taking into account the cognitive processes required to answer them and students' written responses to them. Language-related explanations that may have led to differences in performance were found for both items, suggesting that judgmental reviews can be made better able to identify language-related bias. Suggestions are given on how to do this.
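For readers unfamiliar with differential item functioning (DIF), the following is a minimal sketch of one standard way to quantify it, the Mantel-Haenszel procedure, which compares two language groups matched on total test score. The data shapes, variable names, and flagging threshold are illustrative assumptions; the abstract does not specify which DIF statistic was used in PISA.

```python
import math
from collections import defaultdict

def mantel_haenszel_dif(scores, groups, item_correct):
    """Mantel-Haenszel DIF statistic for one dichotomous item.

    scores       -- total test score per student (the matching variable)
    groups       -- 'ref' or 'foc' per student (e.g. source- vs. target-language group)
    item_correct -- 1 if the student answered the studied item correctly, else 0
    """
    # Build one 2x2 table per score stratum: group membership x item response.
    strata = defaultdict(lambda: [0, 0, 0, 0])  # [A, B, C, D]
    for s, g, x in zip(scores, groups, item_correct):
        if g == "ref":
            strata[s][0 if x else 1] += 1  # A: ref correct, B: ref incorrect
        else:
            strata[s][2 if x else 3] += 1  # C: foc correct, D: foc incorrect

    num = den = 0.0
    for a, b, c, d in strata.values():
        t = a + b + c + d
        num += a * d / t
        den += b * c / t
    alpha = num / den                # common odds ratio across strata
    delta = -2.35 * math.log(alpha)  # ETS delta scale; |delta| above ~1.5
    return alpha, delta              # is conventionally flagged as notable DIF
```

A delta near zero means the two language groups of equal ability answered the item about equally well; a large positive or negative delta flags the item for exactly the kind of judgmental review the paper discusses.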
Open-ended (OE) items are widely used to gather data on student performance in international achievement studies. However, several factors may threaten validity when such items are used. This study examined Finnish coders' opinions about threats to validity when coding responses to OE items in the PISA 2012 problem-solving test. Six discussions held during six coder practice sessions (on six OE items) and an interview with five coders were audio-recorded and analyzed by means of content analysis. Three main threats to validity were found: (1) unclear and complex questions; (2) arbitrary and illogical coding rubrics; and (3) unclear and ambiguous responses. Suggestions are given as to how to respond to these threats in order to improve the validity of international achievement studies.
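In practice, the kinds of coding threats the study identifies tend to surface as disagreement between coders. As an illustration only (the study analyzed coder discussions and does not report agreement statistics), here is a minimal sketch of Cohen's kappa, a common chance-corrected measure of agreement between two coders applying the same rubric; the rubric values and data below are hypothetical.

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders' category assignments."""
    n = len(codes_a)
    # Observed proportion of responses the two coders coded identically.
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Agreement expected by chance from each coder's marginal category frequencies.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical example: two coders applying a 0/1/2 rubric to ten responses;
# a low kappa would point to ambiguous responses or an unclear rubric.
print(round(cohens_kappa([2, 1, 0, 2, 1, 1, 0, 2, 2, 1],
                         [2, 1, 0, 1, 1, 2, 0, 2, 2, 1]), 2))  # -> 0.69
```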