The relations among students' motivational beliefs, cognitive processes, and academic achievement were investigated. A 51-item questionnaire and a mathematics achievement test were administered to 459 fifth graders in Korean elementary school mathematics classrooms. Results indicated that, in general, students' cognitive processes were closely related to competence beliefs, task values, and achievement goals; more importantly, students' success or failure in mathematics achievement was closely linked to competence beliefs, performance-avoidance goals, and persistence strategies. Performance-approach goals showed positive effects on math learning relative to task goals, whereas, as expected, performance-avoidance goals proved detrimental to students' math learning. These findings are generally congruent with motivational theories and support the position that students should be encouraged to adopt task goals and involve themselves actively in math class activities. They also suggest, however, that we recognize the potential benefits of performance-approach goals in particular cultural contexts, such as Korean elementary school math classrooms.
Calibration and equating are quintessential for most large‐scale educational assessments. However, there are instances when little consideration is given to the equating process in terms of its context and substantive realization, or to the methods used in its execution. In the authors' view, equating is not merely an exercise in statistical methodology; it is also a reflection of the thought process undertaken in its execution. For example, there is hardly any discussion in the literature of the ideological differences underlying the selection of an equating method. Furthermore, there is little evidence of modeling cohort growth through the identification and use of construct‐relevant drift in linking items under the common-item nonequivalent groups equating design. In this article, the authors philosophically justify the use of Huynh's statistical method for identifying construct‐relevant outliers in the linking pool. The article also dispels the perception of scale instability associated with including construct‐relevant outliers in the linking item pool and concludes that an appreciation of the rationale used in selecting the equating method, together with the use of linking items in modeling cohort growth, can benefit practitioners.
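As a point of reference for the kind of screening the abstract describes, the sketch below flags drifting linking items with a robust-z statistic computed from the difference in item difficulty estimates between two administrations. This is an illustrative reconstruction, not the authors' implementation: the 0.74·IQR scaling and the 1.645 cutoff are conventional choices for robust-z screens and are assumptions here.

```python
import statistics

def robust_z_flags(diff_old, diff_new, crit=1.645):
    """Flag linking items whose difficulty drift is an outlier.

    diff_old / diff_new: item difficulty estimates (e.g., Rasch logits)
    for the same linking items on two administrations.
    Returns a list of (index, drift, robust_z, flagged) tuples.
    """
    drifts = [new - old for old, new in zip(diff_old, diff_new)]
    med = statistics.median(drifts)
    q = statistics.quantiles(drifts, n=4)   # Q1, median, Q3
    iqr = q[2] - q[0]
    results = []
    for i, d in enumerate(drifts):
        # Robust z: distance from the median scaled by 0.74 * IQR,
        # which approximates one standard deviation for normal data.
        z = (d - med) / (0.74 * iqr) if iqr else 0.0
        results.append((i, d, z, abs(z) > crit))
    return results
```

Whether a flagged item is then dropped or retained as a construct-relevant outlier is exactly the substantive judgment the article argues should accompany the statistics.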
The literature in the United States provides many examples of no difference in student achievement across modes of test administration, i.e., paper‐and‐pencil versus online versions of a test. However, most of this research centres on ‘regular’ students who do not require differential teaching methods or different evaluation processes and techniques. This research provides evidence that students who have learning disabilities, like their counterparts in the regular educational programme, do not lag behind in computer adaptation and use. The study, using differential item functioning analysis with an ‘external’ variable and an analysis of covariance, shows that items and tests can be created that exhibit no practical differences across modes of administration for this special group of students. This finding is in keeping with the trend toward online testing, with its many advantages (cost savings, flexibility in administration, etc.), in lieu of the paper-and-pencil version of the test.
English language learners (ELLs) are the fastest growing subgroup in American schools. These students, by a provision in the reauthorization of the Elementary and Secondary Education Act, are to be supported in their quest for language proficiency through the creation of systems that more effectively measure ELLs' progress across years. In the past, ELLs' progress has been based on students' prior scores measuring the same construct. To disentangle effectiveness from achievement, the reporting has generally targeted mean-group activity. In contrast, student growth percentiles (SGPs) provide a comparison of students' growth with that of others who have the same achievement score history. By examining the construct measured by an English language proficiency test as manifested in student scores in Speaking, Listening, Reading, and Writing, this article outlines the use of SGPs in providing information on how much each student needs to grow, which will allow educators to more effectively apply differential formative instructional strategies.

In recent years, the study of English in K-12 settings has taken center stage in the United States because of the growing numbers of English language learners (ELLs) enrolled in schools across the nation (Meyer, Madden, & McGrath, 2004; U.S. Government Accountability Office [GAO], 2006; Van Roekel, 2008). However, many of these students' academic performances fall well below those of their non-ELL peers, not because of a lack of academic achievement but because of the inadequacy of their English language skills (Abedi, 2008). Under the No Child Left Behind Act (NCLB) of 2001, each state is required to assess language proficiency via the four recognized English Language Proficiency (ELP) modalities (i.e., Speaking, Listening, Reading, and Writing).
As McCarthy (1999) points out, the use of modality results is necessary because it helps educators identify the underlying reasons for students' differential performance, look for parallels among the processes involved in learning each modality, and use the information constructively in a classroom setting. According to Abedi (2008), most states use a compensatory model for assessing students' language acquisition. Unlike conjunctive models, where students must achieve "targets" in each of the ELP modalities to be considered proficient, students assessed under the compensatory model can offset weaker performance in one modality with stronger performance in another.
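The SGP idea described above can be made concrete with a toy sketch: a student's growth percentile is the percentile rank of the student's current score among cohort members who share the same prior-score history. This is deliberately simplified and assumes exact-match peer groups; operational SGPs are estimated with quantile regression over score histories rather than exact matching.

```python
def student_growth_percentile(history, current, cohort):
    """Toy SGP: percentile rank of `current` among cohort members
    who share the same prior-score history.

    history: tuple of the student's prior scores.
    cohort:  list of (history_tuple, current_score) pairs.
    """
    peers = [score for h, score in cohort if h == history]
    below = sum(1 for s in peers if s < current)
    ties = sum(1 for s in peers if s == current)
    # Mid-rank convention: ties count as half below, half above.
    return 100.0 * (below + 0.5 * ties) / len(peers)
```

A student scoring in the middle of an exact-history peer group lands near the 50th growth percentile regardless of the absolute score level, which is what lets SGPs separate growth from attained achievement.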