Value-added 'Progress' measures are to be introduced for all English schools in 2016 as 'headline' measures of school performance. This move comes despite research highlighting high levels of instability in value-added measures and concerns about the omission of contextual variables in the planned measure. This article studies the impact of disregarding contextual factors, the stability of school scores across time and the consistency of value-added performance for different cohorts within schools at a given point in time. The first two analyses replicate and extend previous studies using current data, confirming concerns about intake biases and showing that both secondary and primary level value-added measures exhibit worrying levels of instability. The third analysis goes further by examining whether instability across time is likely to stem from differences between cohorts and whether measures based on a single cohort reflect school performance more generally. Combined, these analyses suggest a general problem of imprecision within value-added estimates and that current policy use of value-added is unjustified. Published school performance measures are likely to be profoundly misleading, in particular for those unfamiliar with the level of uncertainty in the estimates. The article closes by considering whether value-added measures could and should be used by policy-makers as measures of school performance.
Feedback is an integral part of education, and there is a substantial body of trials exploring and confirming its effect on learning. This evidence base comes mostly from studies of children of compulsory school age; there is very little evidence to support effective feedback practice in higher education, beyond the frameworks and strategies advocated by those claiming expertise in the area. This systematic review aims to address this gap. We review causal evidence from trials of feedback and formative assessment in higher education. Although the evidence base is currently limited, our results suggest that low-stakes quizzing is a particularly powerful approach and that there are benefits from forms of peer and tutor feedback, although these depend on implementation factors. There was mixed evidence for praise, grading and technology-based feedback. We organise our findings into several evidence-grounded categories and discuss the next steps for the field and evidence-informed feedback practice in universities.