The 'how to' of scaling up public health interventions for maximum reach and outcomes is receiving greater attention; however, there remains a paucity of practical tools to guide those actively involved in scaling up processes in high-income countries. To fill this gap, the New South Wales Ministry of Health developed Increasing the scale of population health interventions: a guide (2014). The guide was informed by a systematic review of scaling up models and methods, and a two-round Delphi process with a sample of senior policy makers, practitioners and researchers actively involved in scaling up processes. Although it is a practical guide to assist health policy makers, health practitioners and others responsible for scaling up effective population health interventions, it can also be used by researchers in the design of research studies that are potentially suitable for scaling up, particularly where research-practice collaborations are involved. The guide is divided into four steps: step 1, 'scalability assessment', aims to determine if an intervention is scalable; step 2, 'developing a scale up plan', aims to develop a practical and workable scaling up plan that can be used to convince stakeholders there is a compelling case for action; step 3, 'preparing for scale up', aims to identify ways of securing the resources needed for going to scale, operating at scale, and building a foundation of legitimacy and support to sustain the scaling up effort through the implementation stage; and step 4, 'scaling up the intervention', involves putting the plan developed in step 2 into place. Although the guide is written as though the user is starting from the point of assessing the scalability of an intervention, later steps can be used by those already involved in scaling up to review their implementation processes. The guide is not intended to be prescriptive.
Its purpose is to help policy makers, practitioners, researchers and other decision makers decide on appropriate methodological and practical choices, and balance what is desirable with what is feasible.
Background
Decisions to scale up population health interventions from small projects to wider state or national implementation are fundamental to maximising population-wide health improvements. The objectives of this study were to examine: i) how decisions to scale up interventions are currently made in practice; ii) the role that evidence plays in informing decisions to scale up interventions; and iii) the role policy makers, practitioners, and researchers play in this process.

Methods
Interviews with an expert panel of senior Australian and international public health policy makers (n = 7), practitioners (n = 7), and researchers (n = 7) were conducted in May 2013, with a participation rate of 84%.

Results
Scaling up decisions were generally made through iterative processes and led by policy makers and/or practitioners, but ultimately approved by political leaders and/or senior executives of funding agencies. Research evidence formed a component of the overall set of information used in decision-making, but its contribution was limited by the paucity of relevant intervention effectiveness research and of data on costs and cost effectiveness. Policy makers, practitioners/service managers, and researchers had different but complementary roles to play in the process of scaling up interventions.

Conclusions
This analysis articulates how decisions to scale up interventions are made, the roles of evidence, and the contributions of different professional groups. More intervention research that includes data on the effectiveness, reach, and costs of operating at scale, and on key service delivery issues (including the acceptability and fit of interventions and delivery models), should be sought, as this has the potential to substantially advance the relevance and ultimately the usability of research evidence for scaling up population health action.
Background
There is a growing emphasis on the importance of research having demonstrable public benefit. Measurements of the impacts of research are therefore needed. We applied a modified impact assessment process that builds on best practice to 5 years (2003–2007) of intervention research funded by Australia's National Health and Medical Research Council to determine if these studies had post-research real-world policy and practice impacts.

Methods
We used a mixed-method sequential methodology whereby chief investigators of eligible intervention studies who completed two surveys and an interview were included in our final sample (n = 50), on which we conducted post-research impact assessments. Data from the surveys and interviews were triangulated with additional information obtained from documentary analysis to develop comprehensive case studies. These case studies were then summarized, and the reported impacts were scored by an expert panel using criteria for four impact dimensions: corroboration, attribution, reach, and importance.

Results
Nineteen (38%) of the cases in our final sample were found to have had policy and practice impacts, with an even distribution of high, medium, and low impact scores. While the tool facilitated a rigorous and explicit criterion-based assessment of post-research impacts, it was not always possible to obtain evidence through documentary analysis to corroborate the impacts reported in chief investigator interviews.

Conclusions
While policy and practice are ideally informed by reviews of evidence, some intervention research can and does have real-world impacts that can be attributed to single studies. We recommend that impact assessments apply explicit criteria to consider the corroboration, attribution, reach, and importance of reported impacts on policy and practice.
Impact assessments should also allow sufficient time to elapse between completion of the original research and collection of impact data, and include mechanisms to obtain end-user input to corroborate claims and reduce the biases that result from seeking information from researchers only.
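The expert-panel scoring described above could, in principle, be operationalised in a few lines. This is a minimal sketch only: the abstract names the four impact dimensions and the high/medium/low bands, but the 1–3 numeric scale and mean-based aggregation rule here are invented for illustration, not the authors' actual procedure.

```python
# Hypothetical scoring sketch for the four impact dimensions named above.
# Scale and aggregation are assumptions, not the published method.
DIMENSIONS = ("corroboration", "attribution", "reach", "importance")

def overall_band(scores: dict) -> str:
    """Map per-dimension scores (1=low, 2=medium, 3=high) to an overall band."""
    missing = set(DIMENSIONS) - set(scores)
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    mean = sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
    if mean >= 2.5:
        return "high"
    if mean >= 1.5:
        return "medium"
    return "low"

# Invented example case study: strong corroboration and importance,
# moderate attribution and reach.
case = {"corroboration": 3, "attribution": 2, "reach": 2, "importance": 3}
print(overall_band(case))  # mean 2.5 -> "high"
```

A mean is only one possible aggregation; a panel might equally treat a low corroboration score as a cap on the overall band.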
Background
Intervention research provides important information regarding feasible and effective interventions for health policy makers, but few empirical studies have explored the mechanisms by which these studies influence policy and practice. This study provides an exploratory case series analysis of the policy, practice and other related impacts of the 15 research projects funded through the New South Wales Health Promotion Demonstration Research Grants Scheme during the period 2000 to 2006, and explores the factors mediating those impacts.

Methods
Data collection included semi-structured interviews with the chief investigators (n = 17) and end-users (n = 29) of each of the 15 projects to explore if, how and under what circumstances the findings had been used, as well as bibliometric analysis and verification using documentary evidence. Data analysis involved thematic coding of interview data and triangulation with other data sources to produce case summaries of impacts for each project. Case summaries were then individually assessed against four impact criteria and discussed at a verification panel meeting, where final group assessments of the impact of research projects were made and key influences on research impact identified.

Results
Funded projects had variable impacts on policy and practice. Project findings were used for agenda setting (raising awareness of issues), identifying areas and target groups for interventions, informing new policies, and supporting and justifying existing policies and programs across sectors. Reported factors influencing the use of findings were: i) the nature of the intervention; ii) leadership and champions; iii) research quality; iv) effective partnerships; v) the dissemination strategies used; and vi) contextual factors.

Conclusions
The case series analysis provides new insights into how and under what circumstances intervention research is used to influence real-world policy and practice.
The findings highlight that intervention research projects can achieve the greatest policy and practice impacts if they address the proximal needs of the policy context by engaging end-users from the inception of projects, utilizing existing policy networks and structures, and using a range of strategies to disseminate findings that go beyond traditional peer-review publications.
Background
There is growing interest among funding bodies and researchers in assessing the impact of research on real-world policy and practice. Population health monitoring surveys provide an important source of data on the prevalence and patterns of health problems, but few empirical studies have explored if and how such data are used to influence policy or practice decisions. Here we provide a case study analysis of how the findings from an Australian population monitoring survey series of children's weight and weight-related behaviors (the Schools Physical Activity and Nutrition Survey (SPANS)) have been used, and the key facilitators of and barriers to their utilization.

Methods
Data collection included semi-structured interviews with the chief investigators (n = 3) and end-users (n = 9) of SPANS data to explore if, how and under what circumstances the survey findings had been used, together with bibliometric analysis and verification using documentary evidence. Data analysis involved thematic coding of interview data and triangulation with other data sources to produce case summaries of policy and practice impacts for each of the three survey years (1997, 2004, 2010). Case summaries were then reviewed and discussed by the authors to distil key themes on if, how and why the SPANS findings had been used to guide policy and practice.

Results
We found that the survey findings were used for agenda setting (raising awareness of issues), identifying areas and target groups for interventions, informing new policies, and supporting and justifying existing policies and programs across a range of sectors. Reported factors influencing use of the findings were: i) the perceived credibility of the survey findings; ii) the dissemination strategies used; and iii) a range of contextual factors.

Conclusions
Using a novel approach, our case study provides important new insights into how and under what circumstances population health monitoring data can be used to influence real-world policy and practice.
The findings highlight the importance of population monitoring programs being conducted by independent credible agencies, researchers engaging end-users from the inception of survey programs and utilizing existing policy networks and structures, and using a range of strategies to disseminate the findings that go beyond traditional peer review publications.
Background
Measuring the policy and practice impacts of research is becoming increasingly important. Policy impacts can be measured from two directions: tracing forward from research and tracing backwards from a policy outcome. In this review, we compare these approaches and document the characteristics of studies assessing research impacts on policy and the policy utilisation of research.

Methods
Keyword searches of electronic databases were conducted in December 2016. Included studies were published between 1995 and 2016 in English and reported the methods and findings of studies measuring policy impacts of specified health research, or research use in relation to a specified health policy outcome, and reviews reporting methods of research impact assessment. Using an iterative data extraction process, we developed a framework to define the key elements of empirical studies (assessment reason, assessment direction, assessment starting point, unit of analysis, assessment methods, assessment endpoint and outcomes assessed) and then documented the characteristics of included empirical studies according to this framework.

Results
We identified 144 empirical studies and 19 literature reviews. Empirical studies were derived from two parallel streams of research of equal size, which we termed 'research impact assessments' and 'research use assessments'. Both streams provided insights about the influence of research on policy and utilised similar assessment methods, but approached measurement from opposite directions. Research impact assessments predominantly utilised forward tracing approaches, while the converse was true for research use assessments. Within each stream, assessments focussed on narrow or broader research/policy units of analysis as the starting point for assessment, each with associated strengths and limitations.
The two streams differed in their relative focus on the contributions made by specific research (research impact assessments) versus research more generally (research use assessments), and in the emphasis placed on research and the activities of researchers, in comparison to other factors and actors, as influencers of change.

Conclusions
The framework presented in this paper provides a mechanism for comparing studies within this broad field of research enquiry. Forward and backward tracing approaches, and their different ways of 'looking', tell different stories of research-based policy change. Combining approaches may provide the best way forward in terms of linking outcomes to specific research, as well as providing a realistic picture of research influence.

Electronic supplementary material
The online version of this article (10.1186/s12961-018-0310-4) contains supplementary material, which is available to authorized users.
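The framework's seven elements lend themselves to a simple record type for classifying studies. A minimal sketch follows; the field names mirror the elements listed in the abstract, but the example values and the forward/backward encoding are illustrative assumptions, not data from the review.

```python
from dataclasses import dataclass
from enum import Enum

class Direction(Enum):
    """The two measurement directions described in the review."""
    FORWARD = "tracing forward from research"
    BACKWARD = "tracing backward from a policy outcome"

@dataclass
class AssessmentRecord:
    """One empirical study classified by the framework's seven elements."""
    assessment_reason: str
    assessment_direction: Direction
    assessment_starting_point: str
    unit_of_analysis: str
    assessment_methods: str
    assessment_endpoint: str
    outcomes_assessed: str

# Hypothetical example from the 'research impact assessment' stream,
# which predominantly used forward tracing.
example = AssessmentRecord(
    assessment_reason="accountability to a funder",
    assessment_direction=Direction.FORWARD,
    assessment_starting_point="a funded research programme",
    unit_of_analysis="individual studies",
    assessment_methods="surveys and interviews",
    assessment_endpoint="policy and practice change",
    outcomes_assessed="policy and practice impacts",
)
print(example.assessment_direction.value)
```

Encoding each included study this way would make the two streams directly comparable, e.g. by grouping records on `assessment_direction`.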
Background
Citation of research in policy documents has been suggested as an indicator of the potential longer-term impacts of research. We investigated the use of research citations in childhood obesity prevention policy documents from New South Wales (NSW), Australia, considering the feasibility and value of using research citation as a proxy measure of research impact.

Methods
We examined childhood obesity policy documents produced between 2000 and 2015, extracting childhood obesity-related references and coding these according to reference type, geographical origin and type of research. A content analysis of the policy documents examined where and how research was cited in the documents and the context of citation for individual research publications.

Results
Over a quarter (28%) of the policy documents (n = 86) were not publicly available, almost two-thirds (63%) contained references, half (47%) cited obesity-related research, and over a third (41%) of those containing references used unorthodox referencing styles, making reference extraction laborious. No patterns were observed in the types of documents more likely to cite research, and the number of obesity research publications cited per document was highly variable. In total, 263 peer-reviewed and 94 non-peer-reviewed obesity research publications were cited. Research was most commonly cited to support a policy argument or choice of solution. However, it was not always possible to determine how or why individual publications were cited, or whether the cited research itself had influenced the policy process. Content analysis identified circumstances where research was mentioned or considered, but not directly cited.

Conclusions
Citation of research in policy documents in this case did not always provide evidence that the cited research had influenced the policy process, only that it was accessible and relevant to the content of the policy document.
Research citation across these public health policy documents varied greatly and is unlikely to be an accurate reflection of actual research use by the policy agencies involved. The links between citation and impact may be more easily drawn in specific policy areas or types of documents (e.g. clinical guidelines), where research appraisal feeds directly into policy recommendations.
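Once references are extracted and coded along the dimensions described above (reference type, geographical origin, type of research), tallying them is straightforward. A minimal sketch, where the coded references are invented for illustration and the code labels are assumptions rather than the study's actual coding scheme:

```python
from collections import Counter

# Hypothetical coded references extracted from policy documents.
# The type/origin labels mirror the coding dimensions described
# above, but the data themselves are invented.
references = [
    {"type": "peer_reviewed", "origin": "Australia"},
    {"type": "peer_reviewed", "origin": "international"},
    {"type": "non_peer_reviewed", "origin": "Australia"},
    {"type": "peer_reviewed", "origin": "Australia"},
]

by_type = Counter(r["type"] for r in references)
by_origin = Counter(r["origin"] for r in references)
print(by_type)    # tally per reference type
print(by_origin)  # tally per geographical origin
```

As the abstract notes, the laborious step in practice is not the tallying but the extraction, given the unorthodox referencing styles in many documents.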
Objectives
To investigate researchers' perceptions about the factors that influenced the policy and practice impacts (or lack of impact) of one of their own funded intervention research studies.

Design
Mixed-method, cross-sectional study.

Setting
Intervention research conducted in Australia and funded by Australia's National Health and Medical Research Council between 2003 and 2007.

Participants
The chief investigators of 50 funded intervention research studies were interviewed to determine if their study had achieved policy and practice impacts, how and why these impacts had (or had not) occurred, and the approach to dissemination they had employed.

Results
We found that statistically significant intervention effects and publication of results influenced whether there were policy and practice impacts, along with factors related to the nature of the intervention itself, the researchers' experience and connections, their dissemination and translation efforts, and the post-research context.

Conclusions
This study indicates that sophisticated approaches to intervention development, dissemination and translation are widespread among experienced researchers and can achieve policy and practice impacts. However, it was the links between the intervention results, further dissemination actions by researchers and a variety of post-research contextual factors that ultimately determined whether a study had policy and practice impacts. Given the complicated interplay between the various factors, there appears to be no simple formula for determining which intervention studies should be funded in order to achieve optimal policy and practice impacts.