Background: Our objective was to develop an instrument to assess the methodological quality of systematic reviews, building upon previous tools, empirical evidence and expert consensus.
Clinical practice guidelines, which are systematically developed statements aimed at helping people make clinical, policy-related and system-related decisions, 1,2 frequently vary widely in quality. 3,4 A strategy was needed to differentiate among guidelines and ensure that those of the highest quality are implemented. An international team of guideline developers and researchers, known as the AGREE Collaboration (Appraisal of Guidelines, Research and Evaluation), was established to create a generic instrument to assess both the process of guideline development and the reporting of that process in the guideline. The result of the collaboration's efforts, based on rigorous methodologies, was the original AGREE instrument, a 23-item tool comprising six quality-related domains that was released in 2003 (www.agreetrust.org).

As with any new assessment tool, ongoing development was required to improve its measurement properties, its usefulness to a range of stakeholders and its ease of implementation. Over the years, a number of issues were identified. For example, the original four-point response scale used to answer each item of the AGREE instrument does not comply with methodologic standards of health-measurement design, which threatens the performance and reliability of the instrument. 5 In addition, data on the usefulness of the AGREE items had never been gathered systematically from the perspectives of different groups of users. Further, we were interested in identifying strategies to make the evaluation process more efficient, such as reducing the number of items or the number of required raters, while ensuring the instrument remained reliable and valid. Therefore, an exploration of the role of shorter versions of the AGREE instrument, comprising fewer items tailored to the unique priorities of different stakeholders, was warranted.
Finally, there was a need to establish the fundamentals of construct validity, in other words, whether the AGREE items measure what they purport to measure: variability in the quality of practice guidelines.

Redesign of AGREE

In response to these issues, the AGREE Next Steps Consortium was established and undertook two studies. 6,7 As part of the first study, the consortium introduced a new seven-point response scale and evaluated its performance and measurement properties, analyzed the usefulness of the AGREE items for decisions made by different stakeholders, and systematically elicited stakeholders' recommendations for changes to the AGREE items and domains. 6 In the second study, the consortium evaluated the construct validity of the tool and designed and evaluated new supporting documentation aimed at facilitating efficient and accurate use of the tool. 7

The following key findings emerged from the two studies:

• Ratings of the quality of the AGREE domains are good predictors of outcomes associated with implementation of guidelines. 6
• Participants (i.e., guideline developers or researchers, policy-makers, and clinicians) evaluated AGREE items and dom...
Background

One of the most consistent findings from clinical and health services research is the failure to translate research into practice and policy. As a result of these evidence-practice and policy gaps, patients fail to benefit optimally from advances in healthcare and are exposed to unnecessary risks of iatrogenic harms, and healthcare systems are exposed to unnecessary expenditure resulting in significant opportunity costs. Over the last decade, there has been increasing international policy and research attention on how to reduce the evidence-practice and policy gap. In this paper, we summarise the current concepts and evidence to guide knowledge translation activities, defined as T2 research (the translation of new clinical knowledge into improved health). We structure the article around five key questions: what should be transferred; to whom should research knowledge be transferred; by whom should research knowledge be transferred; how should research knowledge be transferred; and, with what effect should research knowledge be transferred?

Discussion

We suggest that the basic unit of knowledge translation should usually be up-to-date systematic reviews or other syntheses of research findings. Knowledge translators need to identify the key messages for different target audiences and to fashion these in language and knowledge translation products that are easily assimilated by different audiences. The relative importance of knowledge translation to different target audiences will vary by the type of research, and appropriate endpoints of knowledge translation may vary across different stakeholder groups. There are a large number of planned knowledge translation models, derived from different disciplinary, contextual (i.e., setting), and target audience viewpoints.
Most of these suggest that planned knowledge translation for healthcare professionals and consumers is more likely to be successful if the choice of knowledge translation strategy is informed by an assessment of the likely barriers and facilitators. Although our evidence on the likely effectiveness of different strategies to overcome specific barriers remains incomplete, there is a range of informative systematic reviews of interventions aimed at healthcare professionals and consumers (i.e., patients, family members, and informal carers) and of factors important to research use by policy makers.

Summary

There is a substantial (if incomplete) evidence base to guide the choice of knowledge translation activities targeting healthcare professionals and consumers. The evidence base on the effects of different knowledge translation approaches targeting healthcare policy makers and senior managers is much weaker, but there is a profusion of innovative approaches that warrant further evaluation.
, J. M. (2010). What is an adequate sample size? Operationalising data saturation for theory-based interview studies. Psychology & Health, 25(10), pp. 1229-1245. doi: 10.1080

Abstract

In interview studies, sample size is often justified by interviewing participants until "data saturation" is reached. However, there is no agreed method of establishing this. We propose principles for deciding saturation in theory-based interview studies (where conceptual categories are pre-established by existing theory). First, specify a minimum sample size for initial analysis (the initial analysis sample). Second, specify how many more interviews will be conducted without new ideas emerging (the stopping criterion). We demonstrate these principles in two studies, based on the Theory of Planned Behaviour, designed to identify three belief categories (Behavioural, Normative, Control), using an initial analysis sample of 10 and a stopping criterion of 3. Study 1 (retrospective analysis of existing data) identified 84 shared beliefs of 14 general medical practitioners about managing patients with sore throat without prescribing antibiotics. The criterion for saturation was achieved for Normative beliefs but not for other beliefs or study-wise saturation. In Study 2 (prospective analysis), 17 relatives of people with Paget's disease of the bone reported 44 shared beliefs about taking genetic testing. Study-wise data saturation was achieved at interview 17. We propose specification of these principles for reporting data saturation in theory-based interview studies. The principles may be adaptable for other types of studies.
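The two principles above (an initial analysis sample, then a fixed run of interviews with no new ideas) amount to a simple stopping rule. A minimal sketch in Python, assuming each interview is coded as a set of belief labels; the function name and data layout are illustrative, not taken from the paper:

```python
def saturation_point(interviews, initial_sample=10, stopping_criterion=3):
    """Return the 1-based interview index at which saturation is declared.

    interviews: iterable of sets, each holding the belief codes identified
    in one interview (hypothetical data layout). After the first
    `initial_sample` interviews are analysed, saturation is declared once
    `stopping_criterion` consecutive further interviews add no new codes.
    Returns None if saturation is never reached.
    """
    seen = set()
    run_without_new = 0
    for i, codes in enumerate(interviews, start=1):
        new = set(codes) - seen
        seen |= new
        if i <= initial_sample:
            continue  # still within the initial analysis sample
        run_without_new = 0 if new else run_without_new + 1
        if run_without_new >= stopping_criterion:
            return i
    return None
```

Under this reading of the rule, the earliest possible saturation point with the paper's parameters (10 and 3) is interview 13; a new belief emerging mid-run resets the count.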
AMSTAR has good agreement, reliability, construct validity, and feasibility. These findings need confirmation by a broader range of assessors and a more diverse range of reviews.
Background

There is little systematic operational guidance about how best to develop complex interventions to reduce the gap between practice and evidence. This article is one in a Series of articles documenting the development and use of the Theoretical Domains Framework (TDF) to advance the science of implementation research.

Methods

The intervention was developed considering three main components: theory, evidence, and practical issues. We used a four-step approach, consisting of guiding questions, to direct the choice of the most appropriate components of an implementation intervention: Who needs to do what, differently? Using a theoretical framework, which barriers and enablers need to be addressed? Which intervention components (behaviour change techniques and mode(s) of delivery) could overcome the modifiable barriers and enhance the enablers? And how can behaviour change be measured and understood?

Results

A complex implementation intervention was designed that aimed to improve acute low back pain management in primary care. We used the TDF to identify the barriers and enablers to the uptake of evidence into practice and to guide the choice of intervention components. These components were then combined into a cohesive intervention. The intervention was delivered via two facilitated interactive small group workshops. We also produced a DVD to distribute to all participants in the intervention group. We chose outcome measures in order to assess the mediating mechanisms of behaviour change.

Conclusions

We have illustrated a four-step systematic method for developing an intervention designed to change clinical practice based on a theoretical framework. The method of development provides a systematic framework that could be used by others developing complex implementation interventions.
While this framework should be iteratively adjusted and refined to suit other contexts and settings, we believe that the four-step process should be maintained as the primary framework to guide researchers through a comprehensive intervention development process.