Background: Integrated knowledge translation (IKT) refers to collaboration between researchers and decision-makers. While advocated as an approach for enhancing the relevance and use of research, IKT is challenging and inconsistently applied. This study sought to inform future IKT practice and research by synthesizing studies that empirically evaluated IKT and by identifying knowledge gaps. Methods: We performed a scoping review. We searched MEDLINE, EMBASE, and the Cochrane Library from 2005 to 2014 for English-language studies that evaluated IKT interventions involving researchers and organizational or policy-level decision-makers. Data were extracted on study characteristics; the IKT intervention (theory, content, mode, duration, frequency, personnel, participants, timing from initiation, initiator, source of funding, decision-maker involvement); and the enablers, barriers, and outcomes reported by studies. We performed content analysis and reported summary statistics. Results: Thirteen studies were eligible after screening 14,754 titles and reviewing 106 full-text studies. Details about IKT activities were poorly reported, and none were formally based on theory. Studies varied in the number and type of interactions between researchers and decision-makers; meetings were the most common format. All studies reported barriers and facilitators. Studies reported a range of positive and sub-optimal outcomes. Outcomes did not appear to be associated with the initiator of the partnership, dedicated funding, partnership maturity, nature of decision-maker involvement, presence or absence of enablers or barriers, or the number of different IKT activities. Conclusions: The IKT strategies that achieve beneficial outcomes remain unknown. We generated a summary of IKT approaches, enablers, barriers, conditions, and outcomes that can serve as the basis for a future review or for planning ongoing primary research.
Future research can contribute to three identified knowledge gaps by examining (1) how different IKT strategies influence outcomes, (2) the relationship between the logic or theory underlying IKT interventions and beneficial outcomes, and (3) when and how decision-makers should be involved in the research process. Future IKT initiatives should more systematically plan and document their design and implementation, and evaluations should report findings in sufficient detail to reveal how IKT was associated with outcomes. Electronic supplementary material: The online version of this article (doi:10.1186/s13012-016-0399-1) contains supplementary material, which is available to authorized users.
This article is part of a series written for people responsible for making decisions about health policies and programmes and for those who support these decision-makers. Policy dialogues allow research evidence to be considered together with the views, experiences, and tacit knowledge of those who will be involved in, or affected by, future decisions about a high-priority issue. Increasing interest in the use of policy dialogues has been fuelled by a number of factors: (1) the recognition of the need for locally contextualised 'decision support' for policymakers and other stakeholders; (2) the recognition that research evidence is only one input into the decision-making processes of policymakers and other stakeholders; (3) the recognition that many stakeholders can add significant value to these processes; and (4) the recognition that many stakeholders, not just policymakers, can take action to address high-priority issues. In this article, we suggest questions to guide those organising and using policy dialogues to support evidence-informed policymaking: (1) Does the dialogue address a high-priority issue? (2) Does the dialogue provide opportunities to discuss the problem, options to address the problem, and key implementation considerations? (3) Is the dialogue informed by a pre-circulated policy brief and by a discussion about the full range of factors that can influence the policymaking process? (4) Does the dialogue ensure fair representation among those who will be involved in, or affected by, future decisions related to the issue? (5) Does the dialogue engage a facilitator, follow a rule about not attributing comments to individuals, and not aim for consensus? (6) Are outputs produced and follow-up activities undertaken to support action?
Background: Deliberative dialogues have recently captured attention in the public health policy arena because they have the potential to address several key factors that influence the use of research evidence in policymaking. We conducted an evaluation of three deliberative dialogues convened in Canada by the National Collaborating Centre for Healthy Public Policy in order to learn more about deliberative dialogues focussed on healthy public policy. Methods: The evaluation included a formative assessment of participants' views about and experiences with ten key design features of the dialogues, and a summative assessment of participants' intention to use research evidence of the type discussed at the dialogue. We surveyed participants immediately after each dialogue was completed and again six months later. We analyzed the ratings using descriptive statistics and the written comments using thematic analysis. Results: A total of 31 individuals participated in the three deliberative dialogues that we evaluated. The response rate was 94% (n = 29; policymakers (n = 9), stakeholders (n = 18), researchers (n = 2)) for the initial survey and 56% (n = 14) for the follow-up. All ten design features examined as part of the formative evaluation were rated favourably by all participant groups. The summative evaluation demonstrated a mean behavioural intention score of 5.8 on a scale from 1 (strongly disagree) to 7 (strongly agree). Conclusion: Our findings reinforce the promise of deliberative dialogues as a strategy for supporting evidence-informed public health policies. Additional work is needed to understand which design elements work in which situations and for which issues, and whether intention to use research evidence is a suitable substitute for measuring actual behaviour change.
Background: Policymakers, stakeholders, and researchers have not been able to find research evidence about health systems using an easily understood taxonomy of topics, to know when they have conducted a comprehensive search of the many types of research evidence relevant to them, or to rapidly identify decision-relevant information in their search results. Methods: To address these gaps, we developed an approach to building a 'one-stop shop' for research evidence about health systems. We developed a taxonomy of health system topics and iteratively refined it by drawing on existing categorization schemes and by using it to categorize progressively larger bundles of research evidence. We identified systematic reviews, systematic review protocols, and review-derived products through searches of Medline, hand searches of several databases indexing systematic reviews, hand searches of journals, and continuous scanning of listservs and websites. We developed an approach to providing 'added value' to existing content (e.g., coding systematic reviews according to the countries in which included studies were conducted) and to expanding the types of evidence eligible for inclusion (e.g., economic evaluations and health system descriptions). Lastly, we developed an approach to continuously updating the online one-stop shop in seven supported languages. Results: The taxonomy is organized by governance, financial, and delivery arrangements and by implementation strategies. The one-stop shop, called Health Systems Evidence, contains a comprehensive inventory of evidence briefs, overviews of systematic reviews, systematic reviews, systematic review protocols, registered systematic review titles, economic evaluations and costing studies, health reform descriptions and health system descriptions, and many types of added-value coding.
It is continuously updated, and new content is regularly translated into Arabic, Chinese, English, French, Portuguese, Russian, and Spanish. Conclusions: Policymakers and stakeholders can now easily access and use a wide variety of types of research evidence about health systems to inform decision-making and advocacy. Researchers and research funding agencies can use Health Systems Evidence to identify gaps in the current stock of research evidence and domains that could benefit from primary research, systematic reviews, and review overviews. Electronic supplementary material: The online version of this article (doi:10.1186/1478-4505-13-10) contains supplementary material, which is available to authorized users.
Background: Communities of practice (CoPs) have been used in the health sector to support professional practice change. However, little is known about how CoPs might be used to influence a system that requires change at and across various levels (i.e., front-line care, organizational, governmental). In this paper, we examine the experience of a CoP in the Canadian province of Ontario as it engages in improving the care of seniors. Our aim is to shed light on using CoPs to facilitate systems change. Methods: This paper draws on year-one findings of a larger multiple case study aiming to increase understanding of the knowledge translation processes mobilized through CoPs. We strategically report on one case to illustrate a critical example of a CoP trying to effect systems change. Primary data included semi-structured interviews with CoP members (n = 8), field notes from five planning meetings, and relevant background documents. Data analysis included deductive coding (i.e., pre-determined codes aligned with the larger project) and inductive coding, which allowed codes and themes to emerge. A thorough description of the case was prepared using all the coded data. Results: The CoP recognized a need to support health professionals (nurses, dentists) and related paraprofessionals with the knowledge, experience, and resources to appropriately address their clients' oral health care needs. Accordingly, the CoP led a knowledge-to-action initiative involving a seven-part webinar series meant to transfer step-by-step, skill-based knowledge through live and archived webinars. Although the core planning team functioned effectively to develop the webinars, the CoP was challenged by organizational and long-term care sector cultures, as well as by governmental structures within the broader health context. Conclusion: The provincial CoP functioned as an incubator that brought together best practices, research, experiences, a reflective learning cycle, and passionate champions.
Nevertheless, the CoP's efforts to stimulate practice changes were met with broader resistance. Research about how to use CoPs to influence health systems change is needed, given that CoPs are being tasked with this goal. Electronic supplementary material: The online version of this article (doi:10.1186/s12961-015-0023-x) contains supplementary material, which is available to authorized users.
Background: Although measures of knowledge translation and exchange (KTE) effectiveness based on the theory of planned behavior (TPB) have been used among patients and providers, no measure has been developed for use among health system policymakers and stakeholders. A tool that measures the intention to use research evidence in policymaking could assist researchers in evaluating the effectiveness of KTE strategies that aim to support evidence-informed health system decision-making. We therefore developed a 15-item tool to measure four TPB constructs (intention, attitude, subjective norm, and perceived control) and assessed its face validity through key informant interviews. Methods: We carried out a reliability study to assess the tool's internal consistency and test-retest reliability. Our study sample consisted of 62 policymakers and stakeholders who participated in deliberative dialogues. We assessed internal consistency using Cronbach's alpha and generalizability (G) coefficients, and we assessed test-retest reliability by calculating Pearson correlation coefficients (r) and G coefficients for each construct and for the tool overall. Results: The internal consistency of items within each construct was good, with alpha ranging from 0.68 to 0.89. G coefficients were lower for a single administration (G = 0.34 to G = 0.73) than for the average of two administrations (G = 0.79 to G = 0.89). Test-retest reliability coefficients for the constructs ranged from r = 0.26 to r = 0.77, and from G = 0.31 to G = 0.62 for a single administration and G = 0.47 to G = 0.86 for the average of two administrations. Test-retest reliability of the tool using G theory was moderate (G = 0.5) when we generalized across a single observation but strong (G = 0.9) when we averaged across both administrations. Conclusion: This study provides preliminary evidence for the reliability of a tool that can be used to measure TPB constructs in relation to research use in policymaking.
Our findings suggest that the tool should be administered on more than one occasion when the intervention promotes an initial 'spike' in enthusiasm for using research evidence (as it seemed to do in this case with deliberative dialogues). The findings from this study will be used to modify the tool and inform further psychometric testing following different KTE interventions.
Background. Improved quality of care and control of healthcare costs are important factors influencing decisions to implement nurse practitioner (NP) and clinical nurse specialist (CNS) roles. Objective. To assess the quality of randomized controlled trials (RCTs) evaluating NP and CNS cost-effectiveness (defined broadly to also include studies measuring health resource utilization). Design. Systematic review of RCTs of NP and CNS cost-effectiveness reported between 1980 and July 2012. Results. 4,397 unique records were reviewed. We included 43 RCTs in six groupings: NP-outpatient (n = 11), NP-transition (n = 5), NP-inpatient (n = 2), CNS-outpatient (n = 11), CNS-transition (n = 13), and CNS-inpatient (n = 1). Internal validity was assessed using the Cochrane risk-of-bias tool; 18 (42%) studies were at low, 17 (39%) at moderate, and 8 (19%) at high risk of bias. Few studies included detailed descriptions of the education, experience, or role of the NPs or CNSs, affecting external validity. Conclusions. We identified 43 RCTs evaluating the cost-effectiveness of NPs and CNSs using criteria that meet current definitions of the roles. Almost half of the RCTs were at low risk of bias. Incomplete reporting of study methods and a lack of detail about NP or CNS education, experience, and role create challenges in consolidating the evidence on the cost-effectiveness of these roles.