For decades there have been calls by concerned stakeholders to improve the quality of education research, and some progress has been made towards creating a more secure evidence base in some areas. More programmes and approaches with a reasonable evidence base are now being used in schools (though not in policy, and not necessarily because of that evidence base). However, there has been no equivalent improvement in secure knowledge about how best to get that evidence into use, or even about what difference it makes when such evidence is used. This paper looks at what little is already known about the different ways of getting research evidence into use in education, summarising the results of a large-scale review of the literature. A total of 323 of the most relevant studies were examined across all areas of public policy and judged for quality and contribution. Very few (33) were of the design and quality needed to make robust causal claims about evidence-into-use, and even fewer of these concerned education. This means that despite over 20 years of modest improvement in research on what works in education policy and practice, the evidence on how best to deploy these findings is still very weak.

We consider studies in terms of several issues, including whether they look at changes in user knowledge and behaviour, or at student outcomes, and how evidence is best modified before use. Providing access to raw research evidence, or even slightly simplified evidence, is not generally an effective way of getting it used, even if that evidence is presented to users by knowledge-brokers, in short courses or similar. What is more likely to work for both policy and practice is the engineering of high-quality evidence into a more usable format, presented actively or iteratively via a respected and trusted conduit, or through population measures such as legislation. Having the users actually do the research is another promising approach.

Expecting each individual funded study to have an impact is not the way forward, as this may encourage widespread use of ineffective or even harmful interventions. Publicly-funded users, including policy-makers, should be required to use evidence-led programmes, appropriate and relevant to their aims, from the libraries that provide them. Research funders should support these approaches and help to build up libraries of successfully tested programmes. Researchers need to be scrupulous, considering their new evidence in the context of what is already known and not seeking 'impact' from single studies. More and better research is needed on the best routes for evidence-into-use. However, the improvements required of all parties are as much ethical in nature as they are technical or scientific.

Keywords: use of evidence, engineering of evidence, translation of evidence, research impact, knowledge transfer, robust evidence.
The use of targeted additional funding for school-age education, intended to improve student attainment, is a widespread phenomenon internationally. It is slightly rarer for the funding to be used to improve attainment specifically for the most disadvantaged students, often by trying to attract teachers to poorer areas or encouraging families to send their children to school. It is rarer still for funding to be used to try to reduce the attainment gap between economically disadvantaged students and their peers, and almost unheard of for the funding to be intended to change the nature of school intakes by making disadvantaged students more attractive to schools. These last two were the objectives set for Pupil Premium funding to schools in England.

The funding started in 2011, for all state-funded schools at the same time, so there is no easy counterfactual to help assess how effective it has been. The funding is a considerable investment every year, and it is therefore important to know whether it works as intended. This paper presents a time series analysis of all students at secondary school in England from 2006, well before the funding started, until 2019, the most recent year for which there are attainment figures. It overcomes concerns that the official attainment gap between students labelled disadvantaged and the rest is sensitive …
Review Rationale and Context

Many intervention studies of summer programmes examine their impact on employment and education outcomes; however, there is growing interest in their effect on young people's offending outcomes. Evidence on summer employment programmes shows promise here but has not yet been synthesised. This report fills that evidence gap through a systematic review and meta-analysis covering both summer education and summer employment programmes, as their contexts and mechanisms are often similar.

Research Objective

The objective is to provide evidence on the extent to which summer programmes affect the outcomes of disadvantaged or 'at risk' young people.

Methods

The review employs mixed methods: we synthesise quantitative information estimating the impact of summer programme allocation/participation across the outcome domains through meta-analysis using the random-effects model, and we synthesise qualitative information relating to contexts, features, mechanisms and implementation issues through thematic synthesis. Literature searches were largely conducted in January 2023. Databases searched include: Scopus; PsycInfo; ERIC; the YFF-EGM; EEF's and TASO's toolkits; RAND's summer programmes evidence review; key academic journals; and Google Scholar. The review employed PICOSS eligibility criteria: the population was disadvantaged or 'at risk' young people aged 10-25; interventions were either summer education or employment programmes; a valid comparison group that did not experience a summer programme was required; studies had to estimate the summer programme's impact on violence and offending, education, employment, socio-emotional and/or health outcomes; eligible study designs were experimental and quasi-experimental; and eligible settings were high-income countries. Other eligibility criteria included publication in English between 2012 and 2022. Process/qualitative evaluations associated with eligible impact studies, or of UK-based interventions, were also included, the latter given the interests of the sponsors. We used the standard methodological procedures expected by The Campbell Collaboration.

The search identified 68 eligible studies, 41 of which were eligible for meta-analysis. Forty-nine studies evaluated 36 summer education programmes, and 19 studies evaluated six summer employment programmes. The number of participants within these studies ranged from fewer than 100 to nearly 300,000. The PICOSS criteria affect the external applicability of the body of evidence: allowances made regarding study design to prioritise evidence on UK-based interventions limit our ability to assess impact for some interventions. The risk-of-bias assessment categorised approximately 75% of the impact evaluations as low quality, due to attrition, losses to follow-up, interventions having low take-up rates, or allocation that might introduce selection bias. As such, intention-to-treat analyses are prioritised. The quality assessment rated 93% of the qualitative studies as low quality, often because they did not employ rigorous qualitative methodologies. These results highlight the need to improve the evidence.

Results and Conclusions

Quantitative Synthesis

The quantitative synthesis examined impact estimates across 34 outcomes, through meta-analysis (22) or in narrative form (12). We summarise below the findings where meta-analysis was possible, along with the researchers' judgement of the security of the findings (high, moderate or low).
This judgement was based on the number and study-design quality of the studies evaluating the outcome; the consistency of findings; the similarity in the specific outcome measures used; and any other issues which might affect our confidence in the summary findings.

Below we summarise the findings from the meta-analyses conducted to assess the impact of allocation to, or participation in, summer education and employment programmes (findings for other outcomes are discussed in the main body, but meta-analysis was not performed due to the low number of studies evaluating them). We only report results pooled across the two programme types where there are no clear differences in findings between summer education and summer employment programmes, so as to avoid attributing an impact to both programme types when it applies to only one. For each outcome we list: the outcome measure; the average effect size type (i.e., whether a standardised mean difference (SMD) or log odds ratio); which programme type the finding relates to; the average effect size with its 95% confidence interval; and the interpretation of the finding, that is, whether there appears to be a significant impact and in which direction (positive or negative, clarifying instances where a negative impact is beneficial). In some instances there may be a discrepancy between the 95% confidence interval and whether we determine there to be a significant impact, due to the specifics of the process for constructing the effect sizes used in the meta-analysis.

We then list the I² statistic and the p-value from the homogeneity test as indications of the presence of heterogeneity. As the sample sizes used in the analyses are often small, and the homogeneity test is known to be under-powered with small samples, it may not detect statistically significant heterogeneity when it is in fact present. As such, a 90% confidence level threshold should generally be used when interpreting this test for the meta-analyses below. The presence of effect size heterogeneity affects the extent to which the average effect size is applicable to all interventions of that summer programme type. We also provide an assessment of the relative confidence we have in the generalisability of each overall finding (low, moderate or high): some of the overall findings are based on a small sample of studies, the studies evaluating the outcome may be of low quality, there may be wide variation in findings among the studies evaluating the outcome, or there may be specific aspects of the impact estimates included, or of the effect sizes constructed, that affect the generalisability of the headline finding. These issues are detailed in full in the main body of the review.
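The machinery behind these summaries is standard random-effects meta-analysis. As a minimal sketch, assuming the common DerSimonian-Laird estimator (the review does not say which random-effects estimator it used) and wholly hypothetical study data, the pooled effect, its 95% confidence interval, the I² statistic and the homogeneity-test p-value can be computed as follows:

```python
import numpy as np
from scipy import stats

def random_effects_meta(effects, variances):
    """DerSimonian-Laird random-effects pooling of study effect sizes,
    returning the pooled effect, its 95% CI, I^2, and the Q-test p-value."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                    # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)               # fixed-effect pooled mean
    q = np.sum(w * (y - y_fe) ** 2)                # Cochran's Q statistic
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                  # between-study variance
    w_re = 1.0 / (v + tau2)                        # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)  # 95% confidence interval
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    p_hom = stats.chi2.sf(q, df)                   # homogeneity-test p-value
    return pooled, ci, i2, p_hom

# Hypothetical log odds ratios and their variances from five studies.
pooled, ci, i2, p = random_effects_meta(
    [0.20, 0.50, 0.10, 0.45, 0.30], [0.04, 0.09, 0.05, 0.12, 0.06])
print(f"pooled log OR = {pooled:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
print(f"I2 = {i2:.1f}%, homogeneity p = {p:.3f}")  # flag p < 0.10, not 0.05,
                                                   # given the test's low power
```

The under-powered Q test is why the review recommends a 90% rather than 95% confidence threshold when judging heterogeneity from small samples.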
Meta-analyses were conducted for the following outcomes (effect-size type in parentheses): engagement with/participation in/enjoyment of education (SMD); secondary education attendance (SMD); passing tests (log OR); reading test scores (SMD); English test scores (SMD); mathematics test scores (SMD); overall test scores (SMD); all test scores (SMD); negative behavioural outcomes (log OR); progression to higher education (log OR); completion of higher education (log OR); entry to employment, short-term (log OR); likelihood of having a criminal justice outcome (log OR); likelihood of having a drug-related criminal justice outcome (log OR); likelihood of having a violence-related criminal justice outcome (log OR); likelihood of having a property-related criminal justice outcome (log OR); number of criminal justice outcomes, during programme (SMD); number of criminal justice outcomes, post-programme (SMD); number of drug-related criminal justice outcomes, post-programme (SMD); number of violence-related criminal justice outcomes, post-programme (SMD); and number of property-related criminal justice outcomes, post-programme (SMD). The pooled estimates and confidence intervals for each outcome are reported in the main body of the review.

We re-express instances of significant impact by programme type, where we have moderate or high confidence in the security of the findings, by translating them into a form used by one of the studies, to aid understanding (a numeric sketch of these back-transformations is given at the end of this summary). Allocation to a summer education programme results in approximately 60% of individuals moving from never reading for fun to doing so once or twice a month (engagement in/participation in/enjoyment of education), and an increase in English Grade Point Average of 0.08. Participation in a summer education programme results in an increase in overall Grade Point Average of 0.14 and makes completion of higher education 1.5 times more likely.

Signs are positive for the effectiveness of summer education programmes in achieving some of the education outcomes considered (particularly test scores when pooled across types, completion of higher education, and STEM-related higher education outcomes), but the evidence on which these overall findings are based is often weak. Summer employment programmes appear to have a limited impact on employment outcomes and, if anything, a negative impact on the likelihood of entering employment outside of employment related to the programme. The evidence base for the impact of summer employment programmes on young people's violence and offending outcomes is currently limited: where impact is detected it largely takes the form of substantial reductions in criminal justice outcomes, but the variation in findings across and within studies limits our ability to make overarching assertions with confidence. In understanding the effectiveness of summer programmes, the ordering of outcomes also requires consideration: entry into education from a summer employment programme might be beneficial if it leads towards better quality employment in the future and a reduced propensity for criminal justice outcomes.

Qualitative Synthesis

Various shared features among different summer education programmes emerged from the review, allowing us to cluster specific types of these interventions, which then aided the structuring of the thematic synthesis. The three distinct clusters for summer education programmes were: catch-up programmes addressing attainment gaps; raising-aspirations programmes inspiring young people to pursue the next stage of their education or career; and transition support programmes facilitating smooth transitions between educational levels.
Depending on their aim, summer education programmes tend to provide a combination of: additional instruction in core subjects (e.g., English, mathematics); academic classes, including those enhancing specialist subject knowledge (e.g., STEM-related); homework help; coaching and mentoring; arts and recreation electives; and social and enrichment activities. Summer employment programmes provide paid work placements or subsidised jobs, typically in entry-level roles, mostly in the third and public sectors, with some programmes also providing placements in the private sector. They usually include components of pre-work training and employability skills, coaching and mentoring.

A number of mechanisms act as facilitators of, or barriers to, engagement in summer programmes. These include: tailoring the summer programme to each young person and individualised attention; the presence of well-prepared staff who provide effective academic/workplace and socio-emotional support; incentives of a monetary (e.g., stipends and wages) or non-monetary (e.g., free transport and meals) nature; recruitment strategies that are effective at identifying, targeting and engaging the participants who can most benefit from the intervention; partnerships with key actors who can help facilitate referrals and recruitment, such as schools, community action and workforce development agencies; format, including providing social activities and opportunities to support the formation of connections with peers; integration into the workplace through pre-placement engagement, such as orientation days, pre-work skills training, job fairs, and interactions with employers ahead of the beginning of the summer programme; and skill acquisition, such as improvements in social skills.

The causal processes which lead from engagement in a summer programme to outcomes include: skill acquisition, including academic, social, emotional and life skills; positive relationships with peers, including with older students as mentors in summer education programmes; personalised and positive relationships with staff; location, including accessibility and creating familiar environments; creating connections between the summer education programme and students' learning at home, to maintain continuity and reinforce learning; and providing purposeful and meaningful work through summer employment programmes (potentially facilitated through financial and/or non-financial incentives), which makes participants more likely to see the importance of education in achieving their life goals, leading to raised aspirations. It is important to note that no single element of a summer programme can be identified as generating the causal process for impact; impact results from a combination of elements.

Finally, we investigated strengths and weaknesses in summer programmes at both the design and implementation stages. In summer education programmes, design strengths include interactive and alternative learning modes; iterative and progressive content building; incorporating confidence-building activities; careful lesson planning; and teacher support tailored to each student. Design weaknesses include insufficient funding or poor funding governance (e.g., delays to funding); limited reach of the target population; and inadequate allocation of teacher and pupil groups (i.e., misalignment between the education stage of the pupils and the content taught by staff).
Implementation strengths include clear programme delivery guidance and good governance; high quality academic instruction; mentoring support; and strong partnerships. Implementation weaknesses include insufficient planning and lead-in time; recruitment challenges; and variability in teaching quality. In summer employment programmes, design strengths include the use of employer orientation materials and supervisor handbooks; careful consideration of programme staff roles; a wide range of job opportunities; and building a network of engaged employers. Design weaknesses are uncertainty over funding and budget agreements; variation in the delivery and quality of training between providers; challenges in recruiting employers; and caseload size and management. Implementation strengths include effective job matching; supportive relationships with supervisors; pre-work training; and mitigating attrition (e.g., striving to increase take-up of the intervention among the treatment group). Implementation weaknesses are an insufficient number of monitors for the number of participants, and challenges around employer availability.
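As a footnote to the quantitative synthesis above, the re-expression of pooled effects rests on two simple back-transformations: exponentiating a log odds ratio to recover an odds ratio, and rescaling an SMD by the outcome's standard deviation. A minimal sketch with illustrative numbers; the GPA standard deviation below is an assumption for illustration, not a figure from the review:

```python
import math

log_or = math.log(1.5)         # a pooled log odds ratio of about 0.405
odds_ratio = math.exp(log_or)  # back-transform: ~1.5 times the odds of
                               # completing higher education

smd = 0.14                     # a pooled standardised mean difference
gpa_sd = 1.0                   # assumed SD of GPA (illustration only)
raw_gain = smd * gpa_sd        # implied raw-scale change in GPA points
print(f"OR = {odds_ratio:.2f}, GPA gain = {raw_gain:.2f}")
```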