Despite growing international interest in the use of data to improve education, few studies examining its effects on student achievement are available. In the present study, the effects of a two-year data-based decision-making intervention on student achievement growth were investigated. Fifty-three primary schools participated in the project, and student achievement data were collected over the two years before and the two years during the intervention. Linear mixed models were used to analyze the differential effect of data use on student achievement. A positive mean intervention effect was estimated, corresponding on average to approximately one extra month of schooling. Furthermore, the results suggest that the intervention especially
Data literacy is a prerequisite for making data-based decisions. This paper focuses on the extent to which educators develop components of data literacy during a one-year data use intervention, as well as what they learn and struggle with concerning these data literacy components. In the data use intervention, teams of teachers, school leaders, and a data expert use data to solve an educational problem at their school. We employed a mixed-methods approach, combining data from a pre- and post-test of data literacy (N = 27), interviews (N = 12), evaluations of meetings (N = 33), and logbooks. Findings show that educators' data literacy increased significantly. Participants and the data coach indicated that educators had learned, for example, to analyze data with Excel and to refute misconceptions. Still, there is room for further improvement. For example, educators struggled with formulating a data use purpose that is plausible, sufficiently concrete, and measurable.
Highlights: Identifies the top five classroom assessments teachers initiate in the classroom. Teachers conduct peer and self-assessment in only 10%–25% of their lessons. Teachers use data for instruction in only 25%–50% of their lessons. Identifies the top five prerequisites teachers consider important for AfL and DBDM. Highlights the need for professional development for teachers in AfL and DBDM.
Providing differentiated instruction (DI) is considered an important but complex teaching skill which many teachers have not mastered and feel unprepared for. In order to design professional development activities, a thorough description of DI is required. The international literature on assessing teachers' differentiation qualities describes the use of various instruments, ranging from self-reports to observation schemes and from perceived-difficulty instruments to student questionnaires. We question whether these instruments truly capture the complexity of differentiation. In order to depict this complexity, a cognitive task analysis (CTA) of the differentiation skill was performed. The resulting differentiation skill hierarchy is presented here, together with the knowledge required for differentiation, and the factors influencing its complexity. Based on the insights of this CTA, professional development trajectories can be designed and a comprehensive assessment instrument can be developed, enabling researchers and practitioners to train, assess, and monitor teaching quality with respect to providing differentiated instruction.
School quality care has become important in many Western countries. Expectations are high, but little is known about the nature and extent of the use of self-evaluation instruments within schools. From this longitudinal study into the use of a Dutch school self-evaluation instrument, it became clear that schools vary in the extent to which they are able to make use of self-evaluation results. A minority of schools in this study were able to use the self-evaluation results for developing measures at the school and classroom level to improve the quality of education. Potential causes for the findings and alternatives for promoting the utilisation of school self-evaluation instruments are discussed.
Introduction and research question
Quality care, performance feedback and school self-evaluation are important themes in current educational policy-making, and are also receiving increased attention in research. Although enormous resources are invested to develop and implement school self-evaluation instruments, how schools actually use these instruments has never been thoroughly evaluated longitudinally (Coe and Visscher 2002a); such a profound evaluation is therefore lacking. Moreover, several studies report a lack of effect of school self-evaluation feedback, but this lack of effect may be caused by a lack of use of school self-evaluation feedback. Therefore, the research question underlying this article is as follows:
School effectiveness research (SER) has flourished since the 1980s. In recent years, however, various authors have criticised several aspects of SER. A thorough review of recent criticism can serve as a good starting point for addressing the flaws of SER, where appropriate, thereby supporting its further development. This article begins by reviewing the criticism from different perspectives, discussing the political-ideological nature of SER, its theoretical limitations, and the research methodology it applies. The review of each type of criticism is accompanied by a review of the recommendations that the critics propose for improving SER. We then present our views on each line of criticism and propose five avenues that we consider promising for the further development of SER.