Background: Implementation science is the study of strategies used to integrate evidence-based practices into real-world settings (Eccles and Mittman, Implement Sci. 1(1):1, 2006). Central to the identification of replicable, feasible, and effective implementation strategies is the ability to assess the impact of contextual constructs and intervention characteristics that may influence implementation, but several measurement issues make this work quite difficult. For instance, it is unclear which constructs lack measures and which measures have any evidence of psychometric properties such as reliability and validity. As part of a larger set of studies to advance implementation science measurement (Lewis et al., Implement Sci. 10:102, 2015), we will complete systematic reviews of measures that map onto the Consolidated Framework for Implementation Research (Damschroder et al., Implement Sci. 4:50, 2009) and the Implementation Outcomes Framework (Proctor et al., Adm Policy Ment Health. 38(2):65-76, 2011), the protocol for which is described in this manuscript. Methods: Our primary databases will be PubMed and Embase. Our search strings will comprise five levels: (1) the outcome or construct term; (2) terms for measure; (3) terms for evidence-based practice; (4) terms for implementation; and (5) terms for mental health. Two trained research specialists will independently review all titles and abstracts, followed by full-text review for inclusion. The research specialists will then conduct measure-forward searches using the “cited by” function to identify all published empirical studies using each measure. The measure and associated publications will be compiled in a packet for data extraction.
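The five-level search string described above can be sketched as follows: synonyms are OR-ed together within a level, and the five levels are AND-ed together. This is a minimal illustration in Python; the term lists here are hypothetical placeholders, not the actual strings used in the protocol.

```python
# Sketch: assemble a Boolean database query from the five search levels.
# The term lists below are illustrative, not the review's actual terms.
levels = {
    "construct": ["acceptability", "feasibility"],
    "measure": ["measure", "scale", "instrument", "questionnaire"],
    "evidence_based_practice": ["evidence-based practice", "evidence-based treatment"],
    "implementation": ["implementation", "dissemination"],
    "mental_health": ["mental health", "behavioral health"],
}

def build_search_string(levels):
    # OR synonyms within a level; AND the levels together.
    groups = ["(" + " OR ".join(f'"{t}"' for t in terms) + ")"
              for terms in levels.values()]
    return " AND ".join(groups)

print(build_search_string(levels))
```

In practice each database (PubMed, Embase) has its own field tags and syntax, so a query like this would be adapted per platform.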
Data relevant to our Psychometric and Pragmatic Evidence Rating Scale (PAPERS) will be independently extracted and then rated using a worst score counts methodology reflecting “poor” to “excellent” evidence. Discussion: We will build a centralized, accessible, searchable repository through which researchers, practitioners, and other stakeholders can identify psychometrically and pragmatically strong measures of implementation contexts, processes, and outcomes. By facilitating the employment of psychometrically and pragmatically strong measures identified through this systematic review, the repository would enhance the cumulativeness, reproducibility, and applicability of research findings in the rapidly growing field of implementation science. Electronic supplementary material: The online version of this article (10.1186/s13643-018-0728-3) contains supplementary material, which is available to authorized users.
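The “worst score counts” aggregation described above can be sketched as follows: each psychometric property is rated per study, and the lowest rating observed across a measure's studies is the one retained. This is a hedged illustration; the label set and its ordering are assumptions for the sketch, not the published PAPERS rubric.

```python
# Sketch of "worst score counts": for each property of a measure, keep
# the lowest rating observed across its associated studies.
# The label ordering below is an assumption for illustration.
RANK = {"poor": 0, "adequate": 1, "good": 2, "excellent": 3}

def worst_score_counts(ratings_by_study):
    """ratings_by_study: list of dicts mapping property -> rating label."""
    worst = {}
    for study in ratings_by_study:
        for prop, label in study.items():
            if prop not in worst or RANK[label] < RANK[worst[prop]]:
                worst[prop] = label
    return worst

studies = [
    {"internal_consistency": "excellent", "validity": "good"},
    {"internal_consistency": "adequate", "validity": "good"},
]
print(worst_score_counts(studies))
# -> {'internal_consistency': 'adequate', 'validity': 'good'}
```

The design choice here is conservative: a measure is only as strong as its weakest reported evidence, which guards against overstating quality from a single favorable study.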
The use of reliable, valid measures in implementation practice will remain limited without pragmatic measures. Previous research identified the need for pragmatic measures, though the characteristics were identified using only expert opinion and literature review. Our team completed four studies to develop stakeholder-driven pragmatic rating criteria for implementation measures. We published Studies 1 (identifying dimensions of the pragmatic construct) and 2 (clarifying the internal structure), which engaged stakeholders—participants in mental health provider and implementation settings—to identify 17 terms/phrases across four categories: Useful, Compatible, Acceptable, and Easy. This paper presents Studies 3 and 4: a Delphi to ascertain stakeholder-prioritized dimensions within a mental health context, and a pilot study applying the rating criteria. Stakeholders (N = 26) participated in a Delphi and rated the relevance of 17 terms/phrases to the pragmatic construct. The investigator team further defined and shortened the list, which was piloted with 60 implementation measures. The Delphi confirmed the importance of all pragmatic criteria but provided little guidance on relative importance. The investigators removed or combined terms/phrases to obtain 11 criteria. The 6-point rating system assigned to each criterion demonstrated sufficient variability across items. The grey literature did not add critical information. This work produced the first stakeholder-driven rating criteria to assess whether measures are pragmatic. The Psychometric and Pragmatic Evidence Rating Scale (PAPERS) combines the pragmatic criteria with psychometric rating criteria from previous work. Use of PAPERS can inform the development of implementation measures and the assessment of the quality of existing measures.
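A minimal sketch of how the 11-criterion, 6-point pragmatic rating described above could be tallied for a single measure. The 0–5 anchors and the idea of summing to a total are assumptions for illustration; the published PAPERS rubric defines the actual criteria and anchors.

```python
# Sketch: tally a measure's pragmatic ratings across 11 criteria,
# each scored on a 6-point scale (assumed 0-5 here for illustration).
def score_pragmatic(ratings, n_criteria=11, scale_max=5):
    """ratings: dict mapping criterion name -> integer rating (0..scale_max)."""
    if len(ratings) != n_criteria:
        raise ValueError(f"expected ratings for all {n_criteria} criteria")
    for criterion, rating in ratings.items():
        if not 0 <= rating <= scale_max:
            raise ValueError(f"rating out of range for {criterion}")
    return sum(ratings.values())

# Hypothetical criterion names; the real PAPERS criteria differ.
example = {f"criterion_{i}": 3 for i in range(1, 12)}
print(score_pragmatic(example))
```

Range checks like these matter in practice: consensus rating by two coders is easier to audit when out-of-range or missing ratings fail loudly instead of silently skewing totals.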
Background: Public policy has tremendous impacts on population health. While policy development has been extensively studied, policy implementation research is newer and relies largely on qualitative methods. Quantitative measures are needed to disentangle differential impacts of policy implementation determinants (i.e., barriers and facilitators) and outcomes to ensure intended benefits are realized. Implementation outcomes include acceptability, adoption, appropriateness, compliance/fidelity, feasibility, penetration, sustainability, and costs. This systematic review identified quantitative measures that are used to assess health policy implementation determinants and outcomes and evaluated the quality of these measures. Methods: Three frameworks guided the review: Implementation Outcomes Framework (Proctor et al.), Consolidated Framework for Implementation Research (Damschroder et al.), and Policy Implementation Determinants Framework (Bullock et al.). Six databases were searched: Medline, CINAHL Plus, PsycInfo, PAIS, ERIC, and Worldwide Political. Searches were limited to English-language, peer-reviewed journal articles published January 1995 to April 2019. Search terms addressed four levels: health, public policy, implementation, and measurement. Empirical studies of public policies addressing physical or behavioral health with quantitative self-report or archival measures of policy implementation with at least two items assessing implementation outcomes or determinants were included. Consensus scoring of the Psychometric and Pragmatic Evidence Rating Scale assessed the quality of measures. Results: Database searches yielded 8417 non-duplicate studies, with 870 (10.3%) undergoing full-text screening, yielding 66 studies. From the included studies, 70 unique measures were identified to quantitatively assess implementation outcomes and/or determinants.
Acceptability, feasibility, appropriateness, and compliance were the most commonly measured implementation outcomes. Common determinants in the identified measures were organizational culture, implementation climate, and readiness for implementation, each an aspect of the internal setting. Pragmatic quality ranged from adequate to good, with most measures freely available, brief, and at a high school reading level. Few psychometric properties were reported. Conclusions: Well-tested quantitative measures of implementation internal settings were under-utilized in policy studies. Further development and testing of external context measures are warranted. This review is intended to stimulate measure development and high-quality assessment of health policy implementation outcomes and determinants to help practitioners and researchers spread evidence-informed policies to improve population health. Registration: Not registered.
Context: Health systems increasingly are exploring implementation of standardized social risk assessments. Implementation requires screening tools both with evidence of validity and reliability (psychometric properties) and that are low cost, easy to administer, readable, and brief (pragmatic properties). These properties for social risk assessment tools are not well understood, and characterizing them could help guide selection of assessment tools and future research. Evidence acquisition: The systematic review was conducted during 2018 and included literature from PubMed and CINAHL published between 2000 and May 18, 2018. Included studies were based in the U.S., included tools that addressed at least 2 social risk factors (economic stability, education, social and community context, healthcare access, neighborhood and physical environment, or food), and were administered in a clinical setting. Manual literature searching was used to identify empirical uses of included screening tools. Data on psychometric and pragmatic properties of each tool were abstracted. Evidence synthesis: Review of 6,838 unique citations yielded 21 unique screening tools and 60 articles demonstrating empirical uses of the included screening tools. Data on psychometric properties were sparse, and few tools reported use of gold standard measurement development methods. Review of pragmatic properties indicated that tools were generally low cost, written for low-literacy populations, and easy to administer. Conclusions: Multiple low-cost, low-literacy tools are available for social risk screening in clinical settings, but psychometric data are very limited. More research is needed on clinic-based screening tool reliability and validity, as these factors should influence both adoption and utility.
Background: Systematic reviews of measures can facilitate advances in implementation research and practice by locating reliable and valid measures and highlighting measurement gaps. Our team completed a systematic review of implementation outcome measures published in 2015 that indicated a severe measurement gap in the field. Now, we offer an update with this enhanced systematic review to identify and evaluate the psychometric properties of measures of eight implementation outcomes used in behavioral health care. Methods: The systematic review methodology is described in detail in a previously published protocol paper and summarized here. The review proceeded in three phases. Phase I, data collection, involved search string generation, title and abstract screening, full text review, construct assignment, and measure-forward searches. Phase II, data extraction, involved coding psychometric information. Phase III, data analysis, involved two trained specialists independently rating each measure using PAPERS (Psychometric And Pragmatic Evidence Rating Scales). Results: Searches identified 150 outcome measures, of which 48 were deemed unsuitable for rating and thus excluded, leaving 102 measures for review. We identified measures of acceptability (N = 32), adoption (N = 26), appropriateness (N = 6), cost (N = 31), feasibility (N = 18), fidelity (N = 18), penetration (N = 23), and sustainability (N = 14). Information about internal consistency and norms was available for most measures (59%). Information about other psychometric properties was often not available.
Ratings for internal consistency and norms ranged from “adequate” to “excellent.” Ratings for other psychometric properties ranged mostly from “poor” to “good.” Conclusion: While measures of implementation outcomes used in behavioral health care (including mental health, substance use, and other addictive behaviors) are unevenly distributed and exhibit mostly unknown psychometric quality, the data reported in this article show an overall improvement in availability of psychometric information. This review identified a few promising measures, but targeted efforts are needed to systematically develop and test measures that are useful for both research and practice. Plain language abstract: When implementing an evidence-based treatment into practice, it is important to assess several outcomes to gauge how effectively it is being implemented. Outcomes such as acceptability, feasibility, and appropriateness may offer insight into why providers do not adopt a new treatment. Similarly, outcomes such as fidelity and penetration may provide important context for why a new treatment did not achieve desired effects. It is important that methods to measure these outcomes are accurate and consistent. Without accurate and consistent measurement, high-quality evaluations cannot be conducted. This systematic review of published studies sought to identify questionnaires (referred to as measures) that ask staff at various levels (e.g., providers, supervisors) questions related to implementation outcomes, and to evaluate the quality of these measures. We identified 150 measures and rated the quality of their evidence with the goal of recommending the best measures for future use. Our findings suggest that a great deal of work is needed to generate evidence for existing measures or build new measures to achieve confidence in our implementation evaluations.
To rigorously measure the implementation of evidence-based interventions, implementation science requires measures that have evidence of reliability and validity across different contexts and populations. Measures that can detect change over time and impact on outcomes of interest are most useful to implementers. Moreover, measures that fit the practical needs of implementers could be used to guide implementation outside of the research context. To address this need, our team developed a rating scale for implementation science measures that considers their psychometric and pragmatic properties and the evidence available. The Psychometric and Pragmatic Evidence Rating Scale (PAPERS) can be used in systematic reviews of measures, in measure development, and to select measures. PAPERS may move the field toward measures that inform robust research evaluations and practical implementation efforts.
Background: Many health systems invest in initiatives to accelerate translation of knowledge into practice. However, organizations lack guidance on how to develop and operationalize such Learning Health System (LHS) programs and evaluate their impact. Kaiser Permanente Washington (KPWA) launched our LHS program in June 2017 and developed a logic model as a foundation to evaluate the program's impact. Objective: To develop a roadmap for organizations that want to establish an LHS program, understand how LHS core components relate to one another when operationalized in practice, and evaluate and improve their progress. Methods: We conducted a narrative review on LHS models, key model components, and measurement approaches. Results: The KPWA LHS Logic Model provides a broad set of constructs relevant to LHS programs, depicts their relationship to LHS operations, harmonizes terms across models, and offers measurable operationalizations of each construct to guide other health systems. The model identifies essential LHS inputs, provides transparency into LHS activities, and defines key outcomes to evaluate LHS processes and impact. We provide reflections on the most helpful components of the model and identify areas that need further improvement using illustrative examples from deployment of the LHS model during the COVID-19 pandemic. Conclusion: The KPWA LHS Logic Model is a starting point for future LHS implementation research and a practical guide for healthcare organizations that are building, operationalizing, and evaluating LHS initiatives.
Introduction: Older adults, who already have higher levels of social isolation, loneliness, and sedentary behavior, are particularly susceptible to negative impacts from social distancing mandates meant to control the spread of COVID-19. We sought to explore the physical, mental, and social health impacts of the pandemic on older adults and their coping techniques. Materials and Methods: We conducted 25 semi-structured interviews with a sub-sample of participants in an ongoing sedentary behavior reduction intervention. Interviews were recorded and transcribed, and iterative coding was used to extract key themes. Results: Most participants reported an increase in sedentary behavior due to limitations on leaving their home and increased free time to pursue seated hobbies (e.g., reading, knitting, TV). However, many participants also reported increased levels of intentional physical activity and exercise, particularly outdoors or online. Participants also reported high levels of stress and a large decrease in in-person social connection. Virtual connection with others through phone and video was commonly used to stay connected with friends and family, engage in community groups and activities, and cope with stress and social isolation. Maintenance of a positive attitude and perspective gained from past hardships was also an important coping strategy for many participants. Discussion: The COVID-19 pandemic and associated social distancing measures have impacted older adults' perceived levels of activity, stress, and social isolation, but many leveraged technology and prior life experiences to cope. These themes could inform future interventions for older adults dealing with chronic stress and isolation.