Recent research in reading comprehension has focused on the processes of reading, while recent thinking in language testing has recognized the importance of gathering information on test-taking processes as part of construct validation. And while there is a growing body of research on test-taking strategies in language testing, as well as research into the relationship between item content and item performance, no research to date has attempted to examine the relationships among all three: test-taking strategies, item content, and item performance. This study thus serves as a methodological exploration in the use of information from both think-aloud protocols and more commonly used types of information on test content and test performance in the investigation of construct validity.
The evaluation of writing ability among both L1 and L2 students has become increasingly important in recent years because the results of such evaluation are used for a variety of administrative, instructional, and research purposes. Classroom teachers, in particular, because they want to use these results to help improve, influence, refine, and shape their students' attained writing ability, have specific concerns regarding the various methods of writing assessment available. Teachers are also concerned with whether the results of any one evaluation procedure are helpful to students and how those results affect students' writing performance and attitudes.
This article does not address the question of how best to use the results of a writing evaluation to improve student writing, but it does present a discussion of the efficacy of various direct and indirect methods of assessing writing ability. The purpose of this article is to outline the assumptions, procedures, and consequences for use of the principal direct scoring schemes and objective measures presently available for evaluating compositions, and of the objective tests currently utilized to determine (indirectly) students' ability to write.
It is hoped that ESL professionals (whether they are teachers, administrators, or researchers) can make use of this discussion as they try to decide how best to handle questions related to writing ability. Although some readers may question the appropriateness of using indirect measures, such as standardized tests, since those only measure recognition of surface‐level syntactic phenomena, the more widely used standardized tests have been included in this survey because they are used by a number of institutions as criterion measures as well as for placement and prediction.
Studies have shown that prequestions (asking students questions before they learn something) benefit memory retention. Prequestions would seem to be a useful technique for enhancing students' learning in their courses, but classroom investigations of prequestions have been sparse. In the current study, students from an introductory psychology course were randomly assigned to receive prequestions over each upcoming lesson (prequestion group) or to not receive prequestions (control group). At the end of class, students in the prequestion group remembered the material better than students in the control group, but this benefit was specific to the information that was asked about in the prequestions. When memory for other, nonprequestioned portions of the lesson was tested at the end of class, the prequestion group performed similarly to the control group. On a follow-up quiz 1 week later, both groups showed a memory advantage for material that had been tested at the end of class 1 week prior, compared with information from the same lesson that was never tested. However, this benefit was comparable between the prequestion group and the control group, suggesting that students benefit from retrieval practice, but prequestions add little, if any, enhancement to this effect.
Self‐reported attitude data over the last decade have often been used as predictors of attained second language proficiency and for other purposes. Three possible sources of non‐random but extraneous variance in self‐reported attitude data are considered: self‐flattery, response set, and the approval motive. It is suggested that some of the variance in verbal intelligence is common to variance in first and second language proficiency, some of which in turn may be common to the kinds of non‐random sources of variance in self‐reported data listed above. It is demonstrated that self‐reported attitude measures may be surreptitious measures of verbal intelligence and/or language proficiency. To the extent that extraneous non‐random sources of variance in self‐reported attitude measures can be shown to exist, the measures must be assumed to be non‐valid.