With the advent of the digital revolution, language testers have endeavored to harness state-of-the-art computer technology to meet the ever-growing demand for tools that measure English communication skills with maximal accuracy and efficiency. Thanks to concerted efforts by experts in fields such as computational linguistics, computer engineering, computer-assisted language learning, and psychometrics, language testers have recently succeeded in developing computer- and web-based language tests, among them the TOEFL CBT by Educational Testing Service and CommuniCAT by the University of Cambridge Local Examinations Syndicate. As with the paper-based language test (PBLT), increasingly rigorous research is now being conducted on the validity of computer-based language tests (CBLT) and computer-adaptive language tests (CALT); content analyses and comparability studies of PBLT and CBLT/CALT are prerequisites to such validation research. In this context, the present study addresses the comparability of PBLT and CBLT through content and construct validation of an EFL test battery, the Test of English Proficiency developed by Seoul National University (TEPS), employing content analyses based on corpus-linguistic techniques alongside statistical analyses such as correlational analyses, ANOVA, and confirmatory factor analysis. The findings support the comparability of the CBLT and PBLT versions of the TEPS subtests in question (listening comprehension, grammar, vocabulary, and reading comprehension).
Although the use of computerized assessment tools in educational and psychological settings has increased dramatically in recent years, limited information is available about the properties of computerized self-concept measures. The authors evaluated the characteristics of computerized and paper-and-pencil versions of the Rosenberg Self-Esteem Scale (SES)—one of the most widely used self-concept measures in educational and psychological research. Results showed that administration mode (computerized versus paper and pencil) had little effect on the psychometric properties of the SES (i.e., score magnitude, variability, and factor structure) but that the computerized version took longer and was preferred by examinees. With the exception of administration time, these results support the use of the computerized SES and its comparability to the paper-and-pencil version.
The purposes of this study were to assess the comparability of scores obtained from computer and paper-and-pencil versions of the Iowa Tests of Educational Development and to evaluate examinees' attitudes about multiple aspects of test administration in the two modes. Findings supported the comparability of scores across administration modes with regard to scaling (means and standard deviations), internal consistency, and criterion- and construct-related validity. Overall, examinees preferred taking the computerized tests and valued many of their operational features. The least favorable attitudes were reported for the literary skills tests, and for their scrollable reading passages in particular.