In this study, the authors evaluated the strengths and limitations of a self‐assessment based on ACTFL Can‐Do statements (ACTFL) as a tool for measuring linguistic gains over an internship abroad in Russia. They assessed its reliability, determined how its items mapped onto the ACTFL scale, and measured the degree to which students' self‐evaluations matched oral proficiency interview (OPI) test results (i.e., predictive validity). The data revealed a high level of reliability. Furthermore, self‐assessment items ascended in the expected order of difficulty (i.e., Superior items were the most difficult, followed by Advanced items), but differences between the means for items representing the ACTFL levels were not statistically significant. Finally, while students demonstrated significant gains from pre‐ to posttest on both the OPI and the self‐assessment, correlations between these measures were only moderate.
Several studies suggest that interteaching improves student learning more than traditional lectures do, but few have examined which components of interteaching contribute to its efficacy. We examined whether the lecture component of interteaching affected students' exam grades and cumulative point totals in a research methods course. Although students who received lectures had consistently higher exam scores than students who did not, the differences were statistically significant on only 2 of the 5 exams. Students who received lectures, however, earned significantly more points over the course of the semester.
The validation of ability scales describing multidimensional skills is always challenging, but not impossible. This study applies a multistage, criterion‐referenced approach that uses a framework of aligned texts and reading tasks to explore the validity of the ACTFL and related reading proficiency guidelines. Rasch measurement and statistical analyses of data generated in three separate language studies confirm a significant difference in reading difficulty between the proficiency levels tested.
While studies have assessed the validity and reliability of the Oral Proficiency Interview (OPI) and the Oral Proficiency Interview–Computer (OPIc) independently, little research has analyzed the interexam reliability of these tests, and no studies have yet compared the results of Spanish language learners who take both exams. For this study, 154 Spanish language learners of various proficiency levels were divided into two groups and administered both the OPI and the OPIc within a 2‐week period using a counterbalanced design. In addition, participants took both a pre‐ and postsurvey that gathered data about their language learning background, familiarity with the OPI and OPIc, preparation and test‐taking strategies, and evaluations of each exam. The researchers found that 54.5% of the participants received the same rating on the OPI and OPIc, with 13.6% of examinees scoring higher on the OPI and 31.8% scoring higher on the OPIc. Although students scored significantly better on the OPIc, the overall effect size was quite small. The authors also found that the overwhelming majority of participants preferred the OPI to the OPIc. This research begins to fill important gaps and provides empirical data for examining the comparability of the Spanish OPI and OPIc.