A version of this article was developed as a background paper for the 2013 Inquiry, by the British Educational Research Association and the Royal Society for the Encouragement of Arts, Manufactures and Commerce, into the contribution of research in teacher education and school improvement, and can be found on the BERA website (www.bera.ac.uk).
Background
Funders of medical research the world over are increasingly seeking, in research assessment, to complement traditional output measures of scientific publications with more outcome-based indicators of societal and economic impact. In the United Kingdom, the Higher Education Funding Council for England (HEFCE) developed proposals for the Research Excellence Framework (REF) to allocate public research funding to higher education institutions, inter alia, on the basis of the social and economic impact of their research. In 2010, it conducted a pilot exercise to test these proposals and refine impact indicators and criteria.

Methods
The impact indicators proposed in the 2010 REF impact pilot exercise are critically reviewed and appraised using insights from the relevant literature and empirical data collected for the University of Oxford's REF pilot submission in clinical medicine. The empirical data were gathered from existing administrative sources and an online administrative survey carried out by the university's Medical Sciences Division among 289 clinical medicine faculty members (48.1% response rate).

Results
The feasibility and scope of measuring research impact in clinical medicine in a given university are assessed. Twenty impact indicators from seven categories proposed by HEFCE are presented; their strengths and limitations are discussed using insights from the relevant biomedical and research policy literature.

Conclusions
While the 2010 pilot exercise has confirmed that the majority of the proposed indicators have some validity, there are significant challenges in operationalising and measuring these indicators reliably, as well as in comparing evidence of research impact across different cases in a standardised manner.
It is suggested that the public funding agencies, medical research charities, universities, and the wider medical research community work together to develop more robust methodologies for capturing and describing impact, including more valid and reliable impact indicators.
Critics of education research in recent years have pointed the finger at what they see as its low quality, limited impact, and poor 'value for money'. In the context of the Research Assessment Exercise, particular concerns have been raised about applied and practice-based educational research and how best to assess its quality. This paper refines ideas originally developed as part of a project commissioned by the ESRC in 2004 and completed in 2005. It argues that quality in applied and practice-based research cannot be reduced to narrow definitions of 'scientificity', 'impact', or economic efficiency. The paper proposes an account of quality in applied and practice-based educational research which encompasses methodological and theoretical solidity, use and impact, but also dialogue, deliberation, participation, ethics and personal growth. Drawing on Aristotelian distinctions between forms of rational activity and their expressions of excellence or virtue, our account emphasises the synergy between three domains of excellence in applied and practice-based research: theoretical (episteme); technical (techne); and practical (phronesis). The thrust of the paper is not to set standards of good research practice, but rather to make progress towards recapturing a cultural and philosophical dimension of research assessment that has been lost in recent official discourses.
The article explores the meanings and significance of criticism as a notable phenomenon in the evolution of educational research during the 1990s. While drawing on an overview of the vast body of documents expressing criticisms of educational research in the UK, western and eastern continental Europe and the USA, it summarises the findings of a study based on the analysis of some of the most influential texts that criticised educational research in the UK during the mid-1990s: Hargreaves (1996), Tooley and Darby (1998), and Hillage et al. (1998). An understanding of the targets, sources, solutions and actors that characterise the recent criticisms of educational research is proposed, together with an exploration of the rhetorical devices employed in expressing criticism and of some of the philosophical themes that underpin the recent debates.
This paper explores recent public debates around research assessment and its future as part of a dynamic landscape of governance discourses and practices, and organisational, professional and disciplinary cultures. Drawing reflectively on data from RAE 2001, RAE 2008 and REF 2014 (reported elsewhere), the paper highlights how recent debates around research assessment echo longer-term changes in research governance. The following changes, and several critiques of their implications, are discussed: shifts in the principles for governing research and the rise of multipurpose assessment; the spread of performance-based funding and external accountability for research; the use of metrics and indicators in research assessment; the boundary work taking place in defining and classifying units or fields for assessment; the emphasis on research impact as a component of research value; organisational recalibration across the sector; and the specialisation of blended professional practice. These changes are underpinned by persistent tensions around accountability; evaluation; measurement; demarcation; legitimation; agency; and identity in research. Overall, such trends and the discursive shifts that made them possible have challenged established principles of funding and governance and have pushed assessment technologies into a pivot position in the political dynamics of renegotiating the relationships between universities and the state. Jointly, the directions of travel identified in this paper describe a widespread and persistent regime of research governance and policy that has become embedded in institutional and individual practices.