Background: High-quality measurement is critical to advancing knowledge in any field. New fields, such as implementation science, are often beset with measurement gaps and poor-quality instruments, a weakness that can be more easily addressed in light of systematic review findings. Although several reviews of quantitative instruments used in implementation science have been published, no studies have focused on instruments that measure implementation outcomes. Proctor and colleagues established a core set of implementation outcomes: acceptability, adoption, appropriateness, cost, feasibility, fidelity, penetration, and sustainability (Adm Policy Ment Health Ment Health Serv Res 36:24–34, 2009). The Society for Implementation Research Collaboration (SIRC) Instrument Review Project employed an enhanced systematic review methodology (Implement Sci 2: 2015) to identify quantitative instruments of implementation outcomes relevant to mental or behavioral health settings.
Methods: Full details of the enhanced systematic review methodology are available (Implement Sci 2: 2015). To increase the feasibility of the review, and consistent with the scope of SIRC, only instruments applicable to mental or behavioral health were included. The review, synthesis, and evaluation included the following: (1) a search protocol for the literature review of constructs; (2) the literature review of instruments using Web of Science and PsycINFO; and (3) data extraction and instrument quality ratings to inform knowledge synthesis. Our evidence-based assessment rating criteria quantified fundamental psychometric properties as well as a crude measure of usability.
Two independent raters applied the evidence-based assessment rating criteria to each instrument to generate a quality profile.
Results: We identified 104 instruments across the eight constructs: nearly half (n = 50) assessed acceptability, 19 assessed adoption, and every other implementation outcome had fewer than 10 instruments. Only one instrument demonstrated at least minimal evidence for psychometric strength on all six of the evidence-based assessment criteria. The majority of instruments had no information regarding responsiveness or predictive validity.
Conclusions: Implementation outcomes instrumentation is underdeveloped with respect to both the sheer number of available instruments and the psychometric quality of existing instruments. Until psychometric strength is established, the field will struggle to identify which implementation strategies work best, for which organizations, and under what conditions.
Electronic supplementary material: The online version of this article (doi:10.1186/s13012-015-0342-x) contains supplementary material, which is available to authorized users.
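The abstract above describes two independent raters generating a quality profile per instrument; such designs typically hinge on inter-rater agreement. A minimal, self-contained sketch of one common agreement statistic, Cohen's kappa, with hypothetical ratings (the rating values and data here are illustrative, not from the SIRC project):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same items on nominal categories."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of agreement.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over categories of the product of marginal proportions.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical quality ratings (0 = no evidence ... 3 = strong) from two raters.
a = [1, 2, 2, 0, 3, 1, 2, 3]
b = [1, 2, 1, 0, 3, 1, 2, 2]
print(round(cohens_kappa(a, b), 3))  # 0.652
```

Kappa corrects raw percent agreement (here 6/8 = 0.75) for agreement expected by chance, which is why reviews of this kind report it rather than simple agreement rates.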
Background: Like many new fields, implementation science has become vulnerable to instrumentation issues that potentially threaten the strength of the developing knowledge base. For instance, many implementation studies report findings based on instruments that do not have established psychometric properties. This article aims to review six pressing instrumentation issues, discuss the impact of these issues on the field, and provide practical recommendations.
Discussion: This debate centers on the impact of the following instrumentation issues: use of frameworks, theories, and models; role of psychometric properties; use of ‘home-grown’ and adapted instruments; choosing the most appropriate evaluation method and approach; practicality; and need for decision-making tools. Practical recommendations include: use of consensus definitions for key implementation constructs; reporting standards (e.g., regarding psychometrics, instrument adaptation); when to use multiple forms of observation and mixed methods; and accessing instrument repositories and decision aid tools.
Summary: This debate provides an overview of six key instrumentation issues and offers several courses of action to limit the impact of these issues on the field. With careful attention to these issues, the field of implementation science can potentially move forward at the rapid pace that is respectfully demanded by community stakeholders.
Electronic supplementary material: The online version of this article (doi:10.1186/s13012-014-0118-8) contains supplementary material, which is available to authorized users.
Background: Identification of psychometrically strong instruments for the field of implementation science is a high priority, underscored in a recent National Institutes of Health working meeting (October 2013). Existing instrument reviews are limited in scope, methods, and findings. The Society for Implementation Research Collaboration Instrument Review Project addresses these limitations through a unique methodology: a systematic, comprehensive review of quantitative instruments assessing constructs delineated in two of the field’s most widely used frameworks; a systematic search process using standard search strings; and an international team of experts assessing the full range of psychometric criteria (reliability, construct validity, and criterion validity). Although this work focuses on implementation of psychosocial interventions in mental health and health-care settings, the methodology and results will likely be useful across a broad spectrum of settings. This effort has culminated in a centralized, online, open-access repository of instruments depicting graphical head-to-head comparisons of their psychometric properties.
This article describes the methodology and preliminary outcomes.
Methods: The seven stages of the review, synthesis, and evaluation methodology include (1) setting the scope for the review, (2) identifying frameworks to organize and complete the review, (3) generating a search protocol for the literature review of constructs, (4) literature review of specific instruments, (5) development of evidence-based assessment rating criteria, (6) data extraction and rating of instrument quality by a task force of implementation experts to inform knowledge synthesis, and (7) creation of a website repository.
Results: To date, this multi-faceted and collaborative search and synthesis methodology has identified over 420 instruments related to 34 constructs (48 in total, including subconstructs) that are relevant to implementation science. Although numerous constructs have more than 20 available instruments, which implies saturation, preliminary results suggest that few instruments stem from gold-standard development procedures. We anticipate identifying few high-quality, psychometrically sound instruments once our evidence-based assessment rating criteria have been applied.
Conclusions: The results of this methodology may enhance the rigor of implementation science evaluations by systematically facilitating access to psychometrically validated instruments and identifying where further instrument development is needed.
Electronic supplementary material: The online version of this article (doi:10.1186/s13012-014-0193-x) contains supplementary material, which is available to authorized users.
Background: Implementation science is the study of strategies used to integrate evidence-based practices into real-world settings (Eccles and Mittman, Implement Sci. 1(1):1, 2006). Central to the identification of replicable, feasible, and effective implementation strategies is the ability to assess the impact of contextual constructs and intervention characteristics that may influence implementation, but several measurement issues make this work quite difficult. For instance, it is unclear which constructs have no measures and which measures have any evidence of psychometric properties like reliability and validity. As part of a larger set of studies to advance implementation science measurement (Lewis et al., Implement Sci. 10:102, 2015), we will complete systematic reviews of measures that map onto the Consolidated Framework for Implementation Research (Damschroder et al., Implement Sci. 4:50, 2009) and the Implementation Outcomes Framework (Proctor et al., Adm Policy Ment Health. 38(2):65-76, 2011), the protocol for which is described in this manuscript.
Methods: Our primary databases will be PubMed and Embase. Our search strings will comprise five levels: (1) the outcome or construct term; (2) terms for measure; (3) terms for evidence-based practice; (4) terms for implementation; and (5) terms for mental health. Two trained research specialists will independently review all titles and abstracts, followed by full-text review for inclusion. The research specialists will then conduct measure-forward searches using the “cited by” function to identify all published empirical studies using each measure. The measure and associated publications will be compiled in a packet for data extraction.
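The five-level search-string structure described in the Methods above (construct, measure, evidence-based practice, implementation, mental health) amounts to AND-ing together groups of OR'd synonyms. A small sketch of how such a Boolean query might be assembled; the terms below are illustrative placeholders, not the protocol's actual search strings:

```python
def build_search_string(levels):
    """AND together groups of OR'd quoted terms, one group per level."""
    groups = ["(" + " OR ".join(f'"{t}"' for t in terms) + ")" for terms in levels]
    return " AND ".join(groups)

# Hypothetical terms for the five levels: construct, measure,
# evidence-based practice, implementation, and mental health.
levels = [
    ["acceptability"],
    ["measure", "instrument", "scale", "survey"],
    ["evidence-based practice", "empirically supported treatment"],
    ["implementation", "adoption", "uptake"],
    ["mental health", "behavioral health"],
]
print(build_search_string(levels))
```

One query of this shape would be generated per construct (level 1) and adapted to each database's syntax, which is why the protocol describes the levels rather than a single fixed string.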
Data relevant to our Psychometric and Pragmatic Evidence Rating Scale (PAPERS) will be independently extracted and then rated using a worst-score-counts methodology reflecting “poor” to “excellent” evidence.
Discussion: We will build a centralized, accessible, searchable repository through which researchers, practitioners, and other stakeholders can identify psychometrically and pragmatically strong measures of implementation contexts, processes, and outcomes. By facilitating the employment of psychometrically and pragmatically strong measures identified through this systematic review, the repository would enhance the cumulativeness, reproducibility, and applicability of research findings in the rapidly growing field of implementation science.
Electronic supplementary material: The online version of this article (10.1186/s13643-018-0728-3) contains supplementary material, which is available to authorized users.
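A worst-score-counts aggregation, as named above, takes the lowest rating observed across studies as a measure's summary rating for each property, so one weak finding caps the score. A minimal sketch of that rule; the property names, ordinal levels, and data here are illustrative simplifications, not the actual PAPERS scale:

```python
# Hypothetical ordinal evidence levels, worst to best.
LEVELS = ["poor", "fair", "good", "excellent"]
RANK = {name: i for i, name in enumerate(LEVELS)}

def worst_score_counts(ratings_per_property):
    """Summarize a measure: for each property, keep the worst (lowest)
    rating observed across all studies that used the measure."""
    return {prop: min(ratings, key=RANK.__getitem__)
            for prop, ratings in ratings_per_property.items()}

# Hypothetical ratings of one measure across three studies.
ratings = {
    "internal_consistency": ["good", "excellent", "fair"],
    "predictive_validity": ["poor", "good"],
}
print(worst_score_counts(ratings))
# {'internal_consistency': 'fair', 'predictive_validity': 'poor'}
```

The design choice is deliberately conservative: a measure only earns a high summary rating on a property when every study supports it.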
Educators are increasingly being encouraged to implement evidence-based interventions and practices to address the social, emotional, and behavioral needs of young children who exhibit problem behavior in early childhood settings. Given the nature of social-emotional learning during the early childhood years and the lack of a common set of core evidence-based practices within the early childhood literature, selection of instructional practices that foster positive social, emotional, and behavioral outcomes for children in early childhood settings can be difficult. The purpose of this paper is to report findings from a study designed to identify common practice elements found in comprehensive intervention models (i.e., manualized interventions that include a number of components) or discrete practices (i.e., a specific behavior or action) designed to target social, emotional, and behavioral learning of young children who exhibit problem behavior. We conducted a systematic review of early childhood classroom interventions that had been evaluated in randomized group designs, quasi-experimental designs, and single-case experimental designs. A total of 49 published articles were identified, and an iterative process was used to identify common practice elements. The practice elements were subsequently reviewed by experts in social-emotional and behavioral interventions for young children. Twenty-four practice elements were identified and classified into content (the goal or general principle that guides a practice element) and delivery (the way in which a teacher provides instruction to the child) categories. We discuss implications that the identification of these practice elements found in the early childhood literature has for efforts to implement models and practices.
Objective: Treatment integrity, or the degree to which an intervention is delivered as intended, serves a crucial function as an independent variable check in treatment outcome research. Implementation science focuses on understanding and improving the processes (e.g., training, supervision, monitoring) that establish and support treatment integrity in community settings. This review assessed the adequacy of treatment integrity procedures (i.e., establishing, assessing, evaluating, and reporting integrity) implemented in treatment outcome research, with the goals of updating the review by Perepletchikova, Treat, and Kazdin (2007) and connecting findings to implementation science goals. Method: Using the Implementation of Treatment Integrity Procedures Scale (Perepletchikova et al., 2007), 2 trained raters coded the treatment integrity procedures described by randomized controlled trials of psychosocial interventions published in 6 high-impact-factor journals from 2011 to 2015 (N = 188 studies describing 270 treatments). Results: Compared with Perepletchikova et al., current findings indicate significant improvement, but the frequency of adequate treatment integrity implementation remains low (10.7%). Conclusions: Recommendations for future work include focus on conceptualization of treatment integrity, establishment of treatment integrity standards, and use of findings from implementation science to improve treatment integrity procedures. What is the public health significance of this article? This review found that the current state of treatment integrity in treatment outcome research has improved slightly in the past decade but remains largely inadequate. Inadequate treatment integrity undermines confidence in the findings from treatment outcome research, regardless of setting. Advances in implementation science may bolster treatment integrity procedures.
Introduction: Studies have found that psychological treatments produce positive clinical outcomes for many problems experienced by youth. However, there is limited research on whether therapist adherence and competence in delivering these treatments are related to differential clinical outcomes. Method: We examined the relationship of therapist adherence and competence to clinical outcomes in a sample of 51 youth aged 7-14 years (M age = 10.36, SD = 1.90; 86.3% white; 60.8% male) treated for anxiety disorders with manualized individual cognitive-behavioral therapy. Adherence and competence were measured by coding recorded treatment session content, and outcomes were measured by caregiver and youth report across multiple timepoints. We used two-level mixed-effects regression models to test the degree to which adherence and competence predicted differential youth clinical outcomes. Results: Across multiple caregiver- and child-reported symptom and diagnostic outcomes, we found no statistically significant relationship between adherence or competence and clinical outcomes. Discussion: Although there was variability in both treatment integrity and clinical outcome, neither adherence to nor competence in youth anxiety treatment was related to clinical outcomes for youth with anxiety disorders treated with individual cognitive-behavioral treatment (CBT) in a research clinic-based efficacy trial. Public Health Statement: Results suggest the possibility that symptom reduction experienced by clients from CBT for child anxiety may not be related to the degree to which therapists followed the CBT treatment procedures closely or competently.