Background: The science of implementation has offered little toward understanding how different implementation strategies work. To improve the outcomes of implementation efforts, the field needs precise, testable theories that describe the causal pathways through which implementation strategies function. In this perspective piece, we describe a four-step approach to developing causal pathway models for implementation strategies. Building causal models: First, it is important to ensure that implementation strategies are appropriately specified. Some strategies in published compilations are well defined but are not specified in terms of their core components, that is, the elements that can have a reliable and measurable impact. Second, linkages between strategies and mechanisms need to be generated. Existing compilations do not identify the mechanisms by which strategies act: the processes or events through which an implementation strategy operates to affect desired implementation outcomes. Third, it is critical to identify the proximal and distal outcomes the strategy is theorized to impact, with the former being direct, measurable products of the strategy and the latter being one of eight implementation outcomes (1). Finally, articulating effect modifiers, like preconditions and moderators, allows for an understanding of where, when, and why strategies have an effect on outcomes of interest. Future directions: We argue for greater precision in the use of terms for factors implicated in implementation processes; development of guidelines for selecting research designs and study plans that account for practical constraints and allow for the study of mechanisms; psychometrically strong and pragmatic measures of mechanisms; and more robust curation of evidence for knowledge transfer and use.
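To make the four-step structure concrete, a single causal pathway can be written down as a plain data structure. The Python sketch below is a hypothetical illustration: the `CausalPathway` class and the example constructs (consultation, self-efficacy, fidelity) are ours, not the paper's.

```python
# Hypothetical representation of one causal pathway model: a minimal sketch,
# not an artifact of the paper itself.
from dataclasses import dataclass, field

@dataclass
class CausalPathway:
    """One strategy -> mechanism -> proximal outcome -> distal outcome chain."""
    strategy: str          # step 1: the appropriately specified strategy
    mechanism: str         # step 2: process through which the strategy operates
    proximal_outcome: str  # step 3: direct, measurable product of the strategy
    distal_outcome: str    # step 3: one of the eight implementation outcomes
    preconditions: list[str] = field(default_factory=list)  # step 4: must hold for the effect
    moderators: list[str] = field(default_factory=list)     # step 4: strengthen or weaken it

# Illustrative (invented) example pathway
pathway = CausalPathway(
    strategy="Ongoing consultation after a training workshop",
    mechanism="Increased clinician self-efficacy",
    proximal_outcome="Skill demonstrated in consultation role-plays",
    distal_outcome="Fidelity",
    preconditions=["Clinician attended the initial workshop"],
    moderators=["Supervisor support for the practice"],
)
print(pathway)
```

Writing a pathway this way forces the strategy, its mechanism, and its proximal outcome to be named separately, which is exactly the precision the four steps call for.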
The current paper articulates how common difficulties encountered when attempting to implement or scale up evidence-based treatments are exacerbated by fundamental design problems, which may be addressed by a set of principles and methods drawn from the contemporary field of user-centered design. User-centered design is an approach to product development that grounds the process in information collected about the individuals and settings where products will ultimately be used. To demonstrate the utility of this perspective, we present four design concepts and methods: (a) clear identification of end users and their needs, (b) prototyping/rapid iteration, (c) simplifying existing intervention parameters/procedures, and (d) exploiting natural constraints. We conclude with a brief design-focused research agenda for the developers and implementers of evidence-based treatments.
Mental health problems are common and pose a tremendous societal burden in terms of cost, morbidity, quality of life, and mortality. The great majority of people experience barriers that prevent access to treatment, aggravated by a lack of mental health specialists. Digital mental health is potentially useful in meeting the treatment needs of large numbers of people. A growing number of efficacy trials have shown strong outcomes for digital mental health treatments. Yet despite their positive findings, there are very few examples of successful implementations and many failures. Although the research-to-practice gap is not unique to digital mental health, the inclusion of technology poses unique challenges. We outline some of the reasons for this gap and propose a collection of methods that can result in sustainable digital mental health interventions. These methods draw from human-computer interaction and implementation science and are integrated into an Accelerated Creation-to-Sustainment (ACTS) model. The ACTS model uses an iterative process that includes 2 basic functions (design and evaluate) across 3 general phases (Create, Trial, and Sustain). The ultimate goal in using the ACTS model is to produce a functioning technology-enabled service (TES) that is sustainable in a real-world treatment setting. We emphasize the importance of the service component because evidence from both research and practice has suggested that human touch is a critical ingredient in the most efficacious and most widely used digital mental health treatments. The Create phase results in at least a minimally viable TES and an implementation blueprint. The Trial phase requires evaluation of both effectiveness and implementation while allowing optimization and continuous quality improvement of the TES and implementation plan. Finally, the Sustain phase involves the withdrawal of research or donor support, while leaving a functioning, continuously improving TES in place. The ACTS model is a step toward bringing implementation and sustainment into the design and evaluation of TESs, public health into clinical research, research into clinics, and treatment into the lives of our patients.
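The model's shape can be summarized in a few lines of code. The Python sketch below is only a schematic of the design/evaluate cycle across the three phases; the function bodies and the criterion for advancing between phases are hypothetical placeholders, not part of the ACTS specification.

```python
# Schematic of the ACTS design/evaluate loop; the logic inside design() and
# evaluate() is a hypothetical stand-in for real activities in each phase.
PHASES = ["Create", "Trial", "Sustain"]

def design(tes, phase):
    """Revise the technology-enabled service (TES) and its implementation plan."""
    return {**tes, "revision": tes.get("revision", 0) + 1, "phase": phase}

def evaluate(tes, phase):
    """Placeholder for phase-appropriate evaluation: usability testing in Create,
    effectiveness/implementation outcomes in Trial, continuous QI in Sustain."""
    return tes["revision"] >= 2  # invented criterion for moving forward

tes = {"name": "ExampleTES"}
for phase in PHASES:
    ready = False
    while not ready:  # iterate design -> evaluate within the current phase
        tes = design(tes, phase)
        ready = evaluate(tes, phase)
    print(f"{phase} phase complete at revision {tes['revision']}")
```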
Strategies specifically designed to facilitate the training of mental health practitioners in evidence-based practices (EBPs) have lagged behind the development of the interventions themselves. The current paper draws from an interdisciplinary literature (including medical training, adult education, and teacher training) to identify useful training and support approaches as well as important conceptual frameworks that may be applied to training in mental health. Theory and research findings are reviewed, which highlight the importance of continued consultation/support following training workshops, congruence between the training content and practitioner experience, and focus on motivational issues. In addition, six individual approaches are presented with careful attention to their empirical foundations and potential applications. Common techniques are highlighted, and applications and future directions for mental health workforce training and research are discussed.

Keywords: Training; Uptake; Workforce development; Implementation; Dissemination

Over the past decades, the mental health field has seen a surge in the development and testing of evidence-based practices (EBPs) for the treatment of a wide variety of adult and youth psychosocial problems. Unfortunately, the advances in EBPs have largely outpaced the development of technologies designed to support their implementation by practitioners in real-world contexts (Fixsen et al. 2005; Ganju 2003; Gotham 2006). One result of this lag is a shortage of treatment providers who are adequately trained and supported to provide EBPs (Kazdin 2008; Weissman et al. 2006). Although reviews of implementation science identify practitioner training as a core implementation component (e.g., Fixsen et al. 2005), research has been limited and trainers in behavioral health repeatedly fail to make use of the existing strategies that have received empirical support (Stuart et al. 2004). The general lack of attention to evidence-based training and implementation methods has been cited as a major contributor to the "research-to-practice gap" commonly described in the mental health literature (Kazdin 2008; McHugh and Barlow 2010; Wandersman et al. 2008). Consequently, efforts to develop or identify the most effective methods and strategies for training existing mental health practitioners in EBPs and/or the core skills underlying many EBPs (e.g., cognitive behavioral strategies, behavioral parenting strategies) have received increasing attention in the literature (e.g., Dimeff et al. 2009; Long 2008; Stirman et al. 2010). Not surprisingly, a disconnect between the scientific literature and the behavior of community professionals is not unique to mental health. Multiple disciplines, including the fields of medicine and education, grapple with how to train their workforces to implement practices that have received empirical support (e.g., Grimshaw et al. ...
Background: A substantial literature has established the role of the inner organizational setting in the implementation of evidence-based practices in community contexts, but very little of this research has been extended to the education sector, one of the most common settings for the delivery of mental and behavioral health services to children and adolescents. The current study examined the factor structure, psychometric properties, and interrelations of an adapted set of pragmatic organizational instruments measuring key aspects of the organizational implementation context in schools: (1) strategic implementation leadership, (2) strategic implementation climate, and (3) implementation citizenship behavior. Method: The Implementation Leadership Scale (ILS), Implementation Climate Scale (ICS), and Implementation Citizenship Behavior Scale (ICBS) were adapted by a research team that included the original scale authors and experts in the implementation of evidence-based practices in schools. These instruments were then administered to a geographically representative sample (n = 196) of school-based mental/behavioral health consultants to assess their reliability and structural validity via a series of confirmatory factor analyses. Results: Overall, the original factor structures for the ILS, ICS, and ICBS were confirmed in the current sample. The one exception was poor functioning of the Rewards subscale of the ICS, which was removed in the final ICS model. Correlations among the revised measures, evaluated as part of an overarching model of the organizational implementation context, indicated both unique and shared variance. Conclusions: The current analyses suggest strong applicability of the revised instruments to the implementation of evidence-based mental and behavioral health practices in the education sector. The one poorly functioning subscale (Rewards on the ICS) was attributed to typical educational policies that do not allow for individual financial incentives to personnel. Potential directions for future expansion, revision, and application of the instruments in schools are discussed.
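For readers who want to see the shape of such an analysis, the sketch below runs a small confirmatory factor analysis with the open-source semopy package. The two-factor specification, item names, and data file are hypothetical simplifications; the actual ILS, ICS, and ICBS models include more factors and items.

```python
# Minimal CFA sketch with semopy; item names and the data file are hypothetical.
import pandas as pd
import semopy

model_desc = """
leadership =~ ils_1 + ils_2 + ils_3
climate    =~ ics_1 + ics_2 + ics_3
leadership ~~ climate
"""

df = pd.read_csv("school_consultant_survey.csv")  # hypothetical item-level data
model = semopy.Model(model_desc)
model.fit(df)

print(model.inspect())           # factor loadings and latent covariance
print(semopy.calc_stats(model))  # fit indices such as CFI and RMSEA
```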
Background: Understanding the mechanisms of implementation strategies (i.e., the processes by which strategies produce desired effects) is important for research to understand why a strategy did or did not achieve its intended effect, and it is important for practice to ensure strategies are designed and selected to directly target determinants or barriers. This study is a systematic review to characterize how mechanisms are conceptualized and measured, how they are studied and evaluated, and how much evidence exists for specific mechanisms. Methods: We systematically searched PubMed and CINAHL Plus for implementation studies published between January 1990 and August 2018 that included the terms "mechanism," "mediator," or "moderator." Two authors independently reviewed titles and abstracts and then full texts for fit with our inclusion criteria of empirical studies of implementation in health care contexts. Authors extracted data regarding general study information, methods, results, study design, and mechanism-specific information. Authors used the Mixed Methods Appraisal Tool to assess study quality. Results: Search strategies produced 2277 articles, of which 183 were included for full-text review. From these, we included 39 articles for data extraction, plus an additional seven articles hand-entered from the only other review of implementation mechanisms (total = 46 included articles). Most included studies employed quantitative methods (73.9%), while 10.9% were qualitative and 15.2% were mixed methods. Nine unique versions of models testing mechanisms emerged. Fifty-three percent of the studies met half or fewer of the quality indicators. The majority of studies (84.8%) met only three or fewer of the seven criteria stipulated for establishing mechanisms. Conclusions: Researchers have undertaken a multitude of approaches to pursue mechanistic implementation research, but our review revealed substantive conceptual, methodological, and measurement issues that must be addressed in order to advance this critical research agenda. To move the field forward, there is a need for greater precision to achieve conceptual clarity, attempts to generate testable hypotheses about how and why variables are related, and use of concrete behavioral indicators of proximal outcomes in the case of quantitative research and more directed inquiry in the case of qualitative research.
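The simplest quantitative model in this literature is a single-mediator test of a mechanism: strategy -> mechanism -> outcome. The sketch below estimates it with the product-of-coefficients method using statsmodels; the column names and data file are hypothetical, and a real analysis would add covariates and bootstrap a confidence interval for the indirect effect.

```python
# Single-mediator sketch: strategy (0/1 indicator) -> mechanism -> outcome.
# Column names and the data file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("implementation_trial.csv")

# Path a: effect of the strategy on the hypothesized mechanism
path_a = smf.ols("mechanism ~ strategy", data=df).fit()

# Paths b and c': effect of the mechanism on the outcome, adjusting for the strategy
path_b = smf.ols("outcome ~ mechanism + strategy", data=df).fit()

indirect = path_a.params["strategy"] * path_b.params["mechanism"]  # mediated effect
direct = path_b.params["strategy"]                                 # remaining direct effect
print(f"indirect effect: {indirect:.3f}, direct effect: {direct:.3f}")
```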
Numerous trials demonstrate that monitoring client progress and using feedback for clinical decision-making enhances treatment outcomes, but available data suggest these practices are rare in clinical settings, and no psychometrically validated measures exist for assessing attitudinal barriers to these practices. This national survey of 504 clinicians collected data on attitudes toward and use of monitoring and feedback. Two new measures were developed and subjected to factor analysis: the Monitoring and Feedback Attitudes Scale (MFA), measuring general attitudes toward monitoring and feedback, and the Attitudes toward Standardized Assessment Scales-Monitoring and Feedback (ASA-MF), measuring attitudes toward standardized progress tools. Both measures showed good fit to their final factor solutions, with excellent internal consistency for all subscales. Scores on the MFA subscales (Benefit, Harm) indicated that clinicians hold generally positive attitudes toward monitoring and feedback, but scores on the ASA-MF subscales (Clinical Utility, Treatment Planning, Practicality) were relatively neutral. Providers with cognitive-behavioral theoretical orientations held more positive attitudes. Only 13.9% of clinicians reported using standardized progress measures at least monthly, and 61.5% never used them. Providers with more positive attitudes reported higher use, providing initial support for the predictive validity of the ASA-MF and MFA. Thus, while clinicians report generally positive attitudes toward monitoring and feedback, routine collection of standardized progress measures remains uncommon. Implications for the dissemination and implementation of monitoring and feedback systems are discussed.
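The internal-consistency evidence reported for such subscales is typically Cronbach's alpha, which is straightforward to compute from item-level responses: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The sketch below uses simulated items, not actual MFA or ASA-MF data.

```python
# Cronbach's alpha from item-level responses; the items here are simulated
# stand-ins for one subscale, not actual MFA/ASA-MF items.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Simulated 5-point Likert responses to a three-item subscale
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=200)
items = pd.DataFrame({
    f"item_{i}": np.clip(base + rng.integers(-1, 2, size=200), 1, 5)
    for i in range(1, 4)
})
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
```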