Background: Improving the speed of systematic review (SR) development is key to supporting evidence-based medicine. Machine-learning tools that semi-automate citation screening might improve efficiency. Few studies have assessed use of screening prioritization functionality or compared two tools head to head. In this project, we compared the performance of two machine-learning tools for potential use in citation screening. Methods: Using 9 evidence reports previously completed by the ECRI Institute Evidence-based Practice Center team, we compared the performance of Abstrackr and EPPI-Reviewer, two off-the-shelf citation screening tools, for identifying relevant citations. Screening prioritization functionality was tested for 3 large reports and 6 small reports on a range of clinical topics. Large report topics were imaging for pancreatic cancer, indoor allergen reduction, and inguinal hernia repair. We trained Abstrackr and EPPI-Reviewer and screened all citations in 10% increments. In Task 1, we inputted whether an abstract was ordered for full-text screening; in Task 2, we inputted whether an abstract was included in the final report. For both tasks, screening continued until all studies ordered and included for the actual reports were identified. We assessed the potential reduction in hypothetical screening burden (the proportion of citations screened to identify all included studies) offered by each tool for all 9 reports. Results: For the 3 large reports, both EPPI-Reviewer and Abstrackr performed well, with potential reductions in screening burden of 4 to 49% (Abstrackr) and 9 to 60% (EPPI-Reviewer). Both tools had markedly poorer performance for 1 large report (inguinal hernia), possibly due to its heterogeneous key questions.
Based on McNemar's test for paired proportions in the 3 large reports, EPPI-Reviewer outperformed Abstrackr at identifying articles ordered for full-text review, but Abstrackr performed better in 2 of 3 reports at identifying articles included in the final report. For the small reports, both tools provided benefits, but EPPI-Reviewer generally outperformed Abstrackr on both tasks, although these differences were often not statistically significant. Conclusions: Abstrackr and EPPI-Reviewer performed well, but prioritization accuracy varied greatly across reports. Our work suggests that screening prioritization functionality is a promising modality, offering efficiency gains without sacrificing human involvement in the screening process.
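The head-to-head comparison above hinges on McNemar's test, which looks only at the discordant pairs: relevant citations that one tool prioritized early but the other did not. A minimal sketch of the continuity-corrected chi-square form, with invented counts (the study's actual discordant counts are not given here):

```python
import math

def mcnemar_chi2(b: int, c: int) -> tuple[float, float]:
    """McNemar chi-square statistic (continuity-corrected) and two-sided p-value.

    b and c are the discordant pair counts: items classified positive by
    tool 1 only (b) and by tool 2 only (c). Concordant pairs drop out.
    """
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    # Survival function of chi-square with 1 df: P(X > stat) = erfc(sqrt(stat / 2))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Hypothetical example: 40 relevant citations prioritized early by one tool
# only, 18 by the other tool only (illustrative values, not study data).
stat, p = mcnemar_chi2(40, 18)
```

For small discordant counts, the exact binomial version of the test is preferable to this chi-square approximation.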
Background: Pediatric lead exposure in the United States (U.S.) remains a preventable public health crisis. Shareable electronic clinical decision support (CDS) could improve lead screening and management. However, discrepancies among federal, state, and local recommendations could present significant challenges for implementation. Methods: We identified publicly available guidance on lead screening and management. We extracted definitions of elevated lead and recommendations for screening, follow-up, reporting, and management. We compared thresholds and levels of obligation for management actions. Finally, we assessed the feasibility of developing shareable CDS. Results: We identified 54 guidance sources. States offered differing definitions of elevated lead and differing recommendations for screening, reporting, follow-up, and management. Only 37 of the 48 states providing guidance used the Centers for Disease Control and Prevention (CDC) definition of elevated lead. There were 17 distinct management actions. Guidance sources indicated an average of 5.5 management actions but offered different criteria and levels of obligation for these actions. Despite these differences, the recommendations were well structured, actionable, and encodable, indicating that shareable CDS is feasible. Conclusion: Current variability across guidance poses challenges for clinicians. Developing shareable CDS is feasible and could improve pediatric lead screening and management. Shareable CDS would need to account for local variability in guidance.
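The feasibility claim above rests on the guidance being encodable as structured logic. One hedged sketch of the idea: keep jurisdiction-specific thresholds and action lists as data, so a single shareable rule set can apply local guidance. All thresholds, state names, and actions below are illustrative placeholders, not actual federal or state guidance:

```python
# Illustrative only: thresholds (in µg/dL) and action lists are placeholders,
# not actual guidance from any jurisdiction.
STATE_THRESHOLDS = {"default": 5.0, "StateA": 5.0, "StateB": 10.0}

def elevated(lead_ugdl: float, state: str) -> bool:
    """Apply the locally applicable definition of an elevated blood lead level."""
    return lead_ugdl >= STATE_THRESHOLDS.get(state, STATE_THRESHOLDS["default"])

def recommended_actions(lead_ugdl: float, state: str) -> list[str]:
    """Map a blood lead result to locally applicable management actions."""
    if not elevated(lead_ugdl, state):
        return ["routine screening at next well-child visit"]
    actions = ["confirm with venous sample", "report to health department"]
    if lead_ugdl >= 45.0:  # placeholder severity threshold
        actions.append("evaluate for chelation therapy")
    return actions
```

Separating the variable values from the shared logic is what lets one CDS artifact travel across the local variability the study documents.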
Background and Significance Quality measurement can drive improvement in clinical care and allow clinicians to easily report care quality, but creating quality measures is a time-consuming and costly process. ECRI (formerly Emergency Care Research Institute) has pioneered a process to support systematic translation of clinical practice guidelines into electronic quality measures using a transparent and reproducible pathway. This process could be used to augment or support the development of electronic quality measures for the American Academy of Otolaryngology–Head and Neck Surgery Foundation (AAO-HNSF) and others as the Centers for Medicare and Medicaid Services transitions from the Merit-Based Incentive Payment System (MIPS) to the MIPS Value Pathways for quality reporting. Methods We used a transparent and reproducible process to create electronic quality measures based on recommendations from 2 AAO-HNSF clinical practice guidelines (cerumen impaction and allergic rhinitis). Steps of this process include source material review, electronic content extraction, logic development, implementation barrier analysis, content encoding and structuring, and measure formalization. Proposed measures then go through the standard publication process for AAO-HNSF measures. Results The 2 guidelines contained 29 recommendation statements, of which 7 were translated into electronic quality measures and published. Intermediate products of the guideline conversion process facilitated development and were retained to support review, updating, and transparency. Of the 7 initially published quality measures, 6 were approved as 2018 MIPS measures, and 2 continued to demonstrate a gap in care after a year of data collection. Conclusion Developing high-quality, registry-enabled measures from guidelines via a rigorous, reproducible process is feasible. The streamlined process was effective in producing quality measures for publication in a timely fashion.
Efforts to better identify gaps in care and more quickly recognize recommendations that would not translate well into quality measures could further streamline this process.
Background: In an era of explosive growth in biomedical evidence, improving systematic review (SR) search processes is increasingly critical. Text-mining tools (TMTs) are a potentially powerful resource for improving and streamlining search strategy development. Two types of TMTs are of particular interest to searchers: word frequency tools (useful for identifying the most-used keyword terms, e.g., PubReMiner) and clustering tools (for visualizing common themes, e.g., Carrot2). Objectives: The objectives of this study were to compare the benefits and trade-offs of searches with and without the use of TMTs for evidence synthesis products in real-world settings. Specific questions included: (1) Do TMTs decrease the time spent developing search strategies? (2) How do TMTs affect the sensitivity and yield of searches? (3) Do TMTs identify groups of records that can be safely excluded in the search evaluation step? (4) Does the complexity of a systematic review topic affect TMT performance? In addition to quantitative data, we collected librarians' comments on their experiences using TMTs to explore when and how these new tools may be useful in systematic review search creation. Methods: In this prospective comparative study, we included seven SR projects and classified them as simple or complex topics. The project librarian used conventional “usual practice” (UP) methods to create the MEDLINE search strategy, while a paired TMT librarian simultaneously and independently created a search strategy using a variety of TMTs. TMT librarians could choose one or more freely available TMTs from a pre-selected list in each of three categories: (1) keyword/phrase tools: AntConc, PubReMiner; (2) subject term tools: MeSH on Demand, PubReMiner, Yale MeSH Analyzer; and (3) strategy evaluation tools: Carrot2, VOSviewer.
We collected the results from both MEDLINE searches (with and without TMTs), coded each citation’s origin (UP or TMT, respectively), deduplicated them, and then sent the citation library to the review team for screening. When the draft report was submitted, we used the final list of included citations to calculate the sensitivity, precision, and number-needed-to-read for each search (with and without TMTs). Separately, we tracked the time each librarian spent on various aspects of search creation. Simple and complex topics were analyzed separately to provide insight into whether TMTs could be more useful for one type of topic than the other. Results: Across all reviews, UP searches seemed to perform better than TMT searches, but because of the small sample size, none of these differences was statistically significant. UP searches were slightly more sensitive (92% [95% confidence interval (CI) 85–99%]) than TMT searches (84.9% [95% CI 74.4–95.4%]). The mean number-needed-to-read was 83 (SD 34) for UP and 90 (SD 68) for TMT. Keyword and subject term development using TMTs generally took less time than development using UP alone. The average total time to create a complete search strategy was 12 hours (SD 8) for UP librarians and 5 hours (SD 2) for TMT librarians. TMTs neither affected search evaluation time nor improved identification of exclusion concepts (irrelevant records) that could be safely removed from the search set. Conclusion: In all reviews but one, TMT searches were less sensitive than UP searches. For simple SR topics (i.e., single indication–single drug), TMT searches were slightly less sensitive but reduced the time spent in search design. For complex SR topics (e.g., multicomponent interventions), TMT searches were less sensitive than UP searches; nevertheless, in complex reviews they identified unique eligible citations not found by the UP searches. TMT searches also reduced time spent in search strategy development.
For all evidence synthesis types, TMT searches may be more efficient in reviews where comprehensiveness is not paramount, or as an adjunct to UP for evidence syntheses, because they can identify unique includable citations. If TMTs were easier to learn and use, their utility would be increased.
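The metrics reported above (sensitivity, precision, number-needed-to-read) follow directly from comparing the set of citations a search retrieved against the final included citations. A minimal sketch with invented citation IDs and counts, not the study's data:

```python
def search_metrics(retrieved: set, included: set) -> dict:
    """Sensitivity, precision, and number-needed-to-read for one search strategy."""
    hits = retrieved & included              # includable citations the search found
    sensitivity = len(hits) / len(included)  # share of includable studies retrieved
    precision = len(hits) / len(retrieved)   # share of retrieved records that are includable
    nnr = len(retrieved) / len(hits)         # number-needed-to-read = 1 / precision
    return {"sensitivity": sensitivity, "precision": precision, "nnr": nnr}

# Hypothetical example: a search returning 1,000 records, 11 of which are
# among the 20 citations included in the final report.
retrieved = set(range(1, 1001))
included = set(range(990, 1010))
metrics = search_metrics(retrieved, included)
```

With these invented sets, sensitivity is 11/20 and number-needed-to-read is about 91, illustrating the trade-off the study measures: a search can be efficient to read through yet miss includable studies.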
Objective The Patient-Centered Outcomes Research Institute (PCORI) horizon scanning system is an early warning system for healthcare interventions in development that could disrupt standard care. We report preliminary findings from the patient engagement process. Methods The system involves broadly scanning many resources to identify and monitor interventions up to 3 years before their anticipated entry into U.S. health care. Topic profiles are written on included interventions with late-phase trial data and circulated with a structured review form for stakeholder comment to determine disruption potential. Stakeholders include patients and caregivers recruited from credible community sources. They view an orientation video, comment on topic profiles, and take a survey about their experience. Results As of March 2020, 312 monitored topics (some of which were archived) had been derived from 3,500 information leads; 121 met the criteria for topic profile development and stakeholder comment. We invited 54 patients and caregivers to participate; 39 reviewed at least one report. Their perspectives informed analyst nominations for 14 topics in two 2019 High Potential Disruption Reports. Thirty-four patient stakeholders completed the user-experience survey. Most agreed (68 percent) or somewhat agreed (26 percent) that they were confident they could provide useful comments, and 94 percent would recommend that others participate. Conclusions The system has successfully engaged patients and caregivers, who contributed unique and important perspectives that informed the selection of topics deemed to have high potential to disrupt clinical care. More research is needed to inform optimal patient and caregiver stakeholder recruitment and engagement methods and to reduce barriers to participation.
Health technology assessments represent comprehensive summaries of the available evidence and information on a technology. They are used by medical decision makers in a variety of contexts, including diagnostic testing, treatment selection, care management, patient perspectives, patient safety, insurance coverage, pharmaceutical innovation, equipment planning, device purchasing, and total cost of care. Electronic clinical data, which clinicians and hospitals capture routinely, are only rarely incorporated into formal health technology assessments. This disconnect reveals a key opportunity. In this paper, we discuss current uses of electronic clinical data, several benefits of including them in health technology assessments, potential pitfalls of that inclusion, and the implications for better medical decisions.
Description of Best Practice We used the ADAPTE method to develop a care protocol for major depression in primary care tailored to the local context, with consideration of the organisation of health care services in primary care. The work was monitored by an expert committee composed of mental health specialists, general practitioners, health care administrators, and decision-makers at the regional and provincial levels. The care protocol is based on two clinical practice guidelines: the NICE guideline on the treatment and management of depression in adults (2010) and the CANMAT clinical guidelines for the management of major depressive disorder in adults (2009). Lessons We will share the challenges associated with adapting clinical recommendations and organisational strategies to the local context, and with the actual implementation of the care protocol in primary care. We will discuss issues concerning the applicability and successful uptake of recommendations in local contexts (e.g., availability of resources for guideline adaptation, types of professionals involved, barriers). Background Adaptation of high-quality external guidelines can be an efficient and effective means to develop guidance more rapidly, allowing resources to shift toward knowledge transfer and health system implementation efforts. Context To describe successful guideline adaptation and implementation strategies used by a large US health care organisation to improve the quality of care for adults with chronic obstructive pulmonary disease (COPD). Description of Best Practice A multidisciplinary guideline team evaluated and adapted a COPD guideline developed by the American College of Physicians, American College of Chest Physicians, American Thoracic Society, and European Respiratory Society (ACP/ACCP/ATS/ERS). Recommendations were evaluated and modified for implementability based on several dimensions of the GLIA tool.
Implementation strategies targeted to physicians included electronic distribution of guidelines, interactive online continuing medical education, and point-of-care encounter support. Implementation efforts targeted to patients included point-of-care education booklets, online resources for COPD self-management, and proactive outreach for spirometry testing. Systems-level interventions included development of patient outreach lists and computerised decision support. Monthly reporting and review on three measures was conducted to monitor performance. Ongoing implementation efforts resulted in increased rates of spirometry testing and management of COPD exacerbations with systemic corticosteroid and bronchodilator medications over a four-year period. Lessons Challenges arise when externally developed guidelines lack the specificity necessary for recommendations to be successfully implemented. Systematic evaluation and modification of recommendations is necessary to enhance implementability at the patient, provider and systems levels, as well as to improve performance.