The advent of technology has fostered growing interest in using computers to convert conventional paper-and-pencil-based testing (henceforth PPT) into computer-based testing (henceforth CBT) in education over recent decades. This steady spread of computers, reshaping conventional tests into computerized formats, has permeated the language assessment field in recent years. However, exploiting the advantages of computers in language assessment raises concerns about the effects that the computerized mode of testing may have on CBT performance. This study therefore investigated the score comparability of a Vocabulary in Use test taken by 30 Iranian undergraduate students studying at a state university in the Chabahar region of Iran (CMU), to see whether scores from the two testing-mode administrations differed. Two similar tests were administered to the male and female participants on two occasions, one in each testing mode, with a four-week interval. A one-way ANOVA comparing the mean scores and a Pearson correlation examining the relationship between mode preference and performance revealed that the two sets of scores did not differ, and gender was not a variable affecting CBT performance. Based on these results, the computerized version of the test can be considered a favorable alternative for state undergraduate students in Iran.
Achievement test scores are used to diagnose strengths and weaknesses and as a basis for awarding prizes, scholarships, or degrees. They are also used to evaluate the influence of courses of study, teachers, teaching methods, and other factors considered significant in educational practice. Still, there is sometimes a gap between examinees' essay test scores and their actual knowledge. The present study examined the relationship between writing skill and the academic achievement of Iranian EFL students to find a logical connection between them. The results of four final exams, taken as content scores, were rescored for writing ability using an analytical scoring scheme based on IELTS criteria. The average of the two sets of scores assigned by two raters was then compared with the content scores of the same tests. The results showed that the correlation between students' content scores and their writing skill was significant at the 0.01 level, indicating a strong relationship between EFL students' content scores and their writing skill.
The purpose of this study was to examine the score comparability of institutional English reading tests in two testing modes, paper-based and computer-based, taken by Iranian EFL learners at four language institutes and their branches in Iran. The researcher examined whether computer-based test (henceforth CBT) results differed from paper-based test (PPT) results on a reading comprehension test, and also explored the relationship between students' prior computer experience and their CBT performance. Two equivalent tests were administered to one group of EFL learners on two occasions, one in paper-based and the other in computer-based format. A t-test comparing the means of the two modes showed the superiority of PPT over CBT, with a significant difference at p < .05. An ANOVA revealed that computer experience had no significant influence on students' performance on the computerized test.
Studies have compared test results under Computer-Based Testing (henceforth CBT) and Paper-Based Testing (henceforth PBT), considering key factors associated with test results across countries with different languages and technological backgrounds. The main purpose of the current study was to examine the equivalency of PBT and CBT scores on the English achievement test at Payame Noor University (PNU) among undergraduate students. It also investigated whether there was any relationship between computer attitude and testing performance on CBT. Based on the quantitative and qualitative data, several major findings emerged. First, there was a statistically significant difference between the two sets of mean scores. Furthermore, descriptive results showed that students performed better on the PBT than on the CBT. These results support the necessity of conducting comparability studies in higher-education contexts before substituting CBT for PBT or incorporating it into the system. In addition, computer attitude had no interaction with CBT performance among Iranian undergraduate students at PNU. Finally, the interview results complemented the quantitative findings: participants mostly expressed a high preference for the computerized test and liked CBT more than PBT, but for various reasons, including their habit of taking tests traditionally, they performed better on the PBT.