Implementation science has great potential to improve the health of communities and individuals who are not achieving health equity. However, implementation science can exacerbate health disparities if its use is biased toward entities that already have the highest capacities for delivering evidence-based interventions. In this article, we examine several methodologic approaches for conducting implementation research to advance equity both in our understanding of what historically disadvantaged populations would need (what we call scientific equity) and in how this knowledge can be applied to produce health equity. We focus on rapid ways to gain knowledge on how to engage, design research, act, share, and sustain successes in partnership with communities. We begin by describing a principle-driven partnership process between community members and implementation researchers to overcome disparities. We then review three innovative implementation method paradigms to improve scientific and health equity and provide examples of each. The first paradigm involves making efficient use of existing data by applying epidemiologic and simulation modeling to understand what drives disparities and how they can be overcome. The second paradigm involves designing new research studies that include, but do not focus exclusively on, populations experiencing disparities in health domains such as cardiovascular disease and co-occurring mental health conditions. The third paradigm involves implementation research that focuses exclusively on populations who have experienced high levels of disparities. To date, our scientific enterprise has invested disproportionately in research that fails to eliminate health disparities. The implementation research methods discussed here hold promise for overcoming barriers and achieving health equity. Ethn Dis. 2019;29(Suppl 1):83-92; doi:10.18865/ed.29.S1.83.
A wide variety of dissemination and implementation designs are now being used to evaluate and improve health systems and outcomes. This chapter discusses randomized and nonrandomized designs for the traditional translational research continuum or pipeline, which builds on existing efficacy and effectiveness trials to examine how one or more evidence-based clinical/prevention interventions are adopted, scaled up, and sustained in community or service delivery systems. The chapter also considers other designs, including hybrid designs that combine effectiveness and implementation research, and designs that use simulation modeling. A case example of a recent large-scale implementation study is presented as an example of measurement and design considerations in dissemination and implementation research. The chapter provides suggested readings and websites useful for design decisions.
High-fidelity models are increasingly used to predict and guide decision-making. Prior work has emphasized the importance of replication in ensuring reliable modeling and has yielded important replication strategies. However, that work is based on relatively simple theory-generating models, and its lessons might not translate to high-fidelity models used for decision support. Using NetLogo, we replicate a recently published high-fidelity model examining the effects of an HIV biomedical intervention. We use a modular approach to build our model from the ground up and illustrate the replication process by investigating the replication of two sub-modules as well as the overall simulation experiment. For the first module, we achieved numerical identity during replication, whereas for the second module we obtained distributional equivalence. We achieved relational equivalence among the overall model behaviors, with a . correlation across the two implementations for our outcome measure, even without strictly following the original model in the formation of the sexual network. Our results show that replication of high-fidelity models is feasible when following a set of systematic strategies that leverage modularity, and they highlight the role of replication standards, modular testing, and functional code in facilitating such strategies.
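The three levels of equivalence named in the abstract (numerical identity, distributional equivalence, relational equivalence) can each be checked mechanically when a model is built modularly. The sketch below illustrates one way to do so in Python; the arrays are synthetic stand-ins for sub-module outputs, not data from the original HIV model, and the thresholds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between the two ECDFs."""
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

# Level 1: numerical identity -- a deterministic sub-module must reproduce
# the original implementation's outputs exactly.
original = np.array([1.0, 2.5, 4.0])
replica = np.array([1.0, 2.5, 4.0])
assert np.array_equal(original, replica)

# Level 2: distributional equivalence -- a stochastic sub-module is compared
# over repeated runs; a small KS statistic means no detectable difference.
runs_original = rng.normal(10.0, 2.0, size=500)
runs_replica = rng.normal(10.0, 2.0, size=500)
ks = ks_statistic(runs_original, runs_replica)

# Level 3: relational equivalence -- overall outcome trajectories should be
# strongly correlated even when implementations differ in low-level detail.
t = np.linspace(0.0, 1.0, 100)
outcome_original = 1 - np.exp(-3 * t) + rng.normal(0, 0.01, 100)
outcome_replica = 1 - np.exp(-3 * t) + rng.normal(0, 0.01, 100)
r = float(np.corrcoef(outcome_original, outcome_replica)[0, 1])
print(f"KS statistic: {ks:.3f}, trajectory correlation: {r:.2f}")
```

Testing modules at progressively looser standards, in this order, localizes a failed replication: an exact mismatch points to the sub-module in question, while a distributional or relational mismatch points to stochastic or structural divergence.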
Background: To improve the quality, quantity, and speed of implementation, careful monitoring of the implementation process is required. However, many health organizations have limited capacity to collect, organize, and synthesize the information relevant to deciding whether to implement an evidence-based program, to the preparation steps necessary for successful program adoption, to the fidelity of program delivery, and to the sustainment of the program over time. When a large health system implements an evidence-based program across multiple sites, a trained intermediary or broker may provide such monitoring and feedback, but this task is labor intensive and not easily scaled up to large numbers of sites. We present a novel approach to automating the monitoring of implementation stage entrances and exits, based on a computational analysis of communication log notes generated by implementation brokers. Potentially discriminating keywords are identified using the definitions of the stages and experts' coding of a portion of the log notes. A machine learning algorithm then produces a decision rule to classify the remaining, unclassified log notes.
Results: We applied this procedure to log notes from the implementation trial of Multidimensional Treatment Foster Care in the California 40-county implementation trial (CAL-40) project, using the Stages of Implementation Completion (SIC) measure. We found that a semi-supervised non-negative matrix factorization method accurately identified most stage transitions. Another computational model was built to determine the start and the end of each stage.
Conclusions: This automated system demonstrated feasibility in this proof-of-concept challenge. We provide suggestions on how such a system can be used to improve the speed, quality, quantity, and sustainment of implementation. The innovative methods presented here are not intended to replace the expertise and judgement of an expert rater already in place. Rather, they can be used when human monitoring and feedback are too expensive to use or maintain. These methods rely on digitized text that already exists or can be collected with minimal to no intrusiveness, and they can signal when additional attention or remediation is required during implementation. Thus, resources can be allocated according to need rather than applied universally or, worse, not applied at all due to their cost.
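The pipeline described above (expert-derived keywords feeding a classifier over log notes) can be illustrated with a deliberately simplified sketch. A plain keyword scorer stands in here for the semi-supervised non-negative matrix factorization used in the CAL-40 analysis, and the stage names and keywords are invented for illustration, not taken from the SIC.

```python
from collections import Counter

# Hypothetical keyword sets per implementation stage (illustrative only;
# in the actual study these were derived from stage definitions and
# expert coding of a subset of log notes).
STAGE_KEYWORDS = {
    "engagement":  {"contacted", "interest", "overview", "meeting"},
    "preparation": {"training", "staffing", "contract", "readiness"},
    "delivery":    {"fidelity", "session", "caseload", "supervision"},
}

def classify_note(note: str) -> str:
    """Assign a broker log note to the stage whose keywords it matches most."""
    tokens = Counter(word.strip(".,;:") for word in note.lower().split())
    scores = {stage: sum(tokens[w] for w in words)
              for stage, words in STAGE_KEYWORDS.items()}
    return max(scores, key=scores.get)

log_notes = [
    "Initial meeting held; county expressed interest in the program overview.",
    "Training dates set and staffing contract signed.",
    "First session delivered; supervision call reviewed fidelity checklist.",
]
stages = [classify_note(n) for n in log_notes]
print(stages)  # -> ['engagement', 'preparation', 'delivery']
```

A sequence of classified notes like this is what a downstream model can scan for stage entrances and exits, flagging sites whose transitions stall for follow-up by a human broker.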