IMPORTANCE In making decisions about patient care, clinicians raise questions and are unable to pursue or find answers to most of them. Unanswered questions may lead to suboptimal patient care decisions.

OBJECTIVE To systematically review studies that examined the questions clinicians raise in the context of patient care decision making.
Objective The amount of information available to clinicians and clinical researchers is growing exponentially. Text summarization condenses this information so that users can find and understand relevant source texts more quickly and with less effort. In recent years, substantial research has been conducted to develop and evaluate summarization techniques in the biomedical domain. The goal of this study was to systematically review recently published research on summarization of textual documents in the biomedical domain.

Materials and methods MEDLINE (2000 to October 2013), the IEEE Digital Library, and the ACM Digital Library were searched. Investigators independently screened and abstracted studies that examined text summarization techniques in the biomedical domain. Information was extracted from the selected articles on five dimensions: input, purpose, output, method, and evaluation.

Results Of 10,786 studies retrieved, 34 (0.3%) met the inclusion criteria. Natural language processing (17; 50%) and a hybrid technique comprising statistical, natural language processing, and machine learning methods (15; 44%) were the most common summarization approaches. Most studies (28; 82%) conducted an intrinsic evaluation.

Discussion This is the first systematic review of text summarization in the biomedical domain. The study identified research gaps and provides recommendations for guiding future research on biomedical text summarization.

Conclusion Recent research has focused on hybrid techniques combining statistical, language processing, and machine learning methods. Further research is needed on the application and evaluation of text summarization in real research or patient care settings.
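To illustrate the statistical end of the summarization spectrum reviewed above, the sketch below scores sentences by normalized word frequency and keeps the top-ranked ones in document order. This is a generic frequency-based extractive summarizer, not the method of any reviewed study; the function name and scoring rule are illustrative assumptions.

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    """Frequency-based extractive summarization (illustrative sketch).

    Scores each sentence by the average corpus frequency of its words,
    then returns the top-scoring sentences in their original order.
    """
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'[a-z]+', text.lower()))

    def score(sentence):
        tokens = re.findall(r'[a-z]+', sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    # Rank all sentences, keep the best n, then re-sort by position
    # so the summary reads in source order.
    ranked = sorted(enumerate(sentences), key=lambda p: score(p[1]), reverse=True)
    top = sorted(ranked[:n_sentences])
    return ' '.join(s for _, s in top)
```

An intrinsic evaluation, as conducted by most reviewed studies, would compare such output against reference summaries (e.g. with ROUGE); an extrinsic evaluation would measure its effect on a user task.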
nology.
''Sec. 3002. HIT Policy Committee.
''Sec. 3003. HIT Standards Committee.
''Sec. 3004. Process for adoption of endorsed recommendations; adoption of initial set of standards, implementation specifications, and certification criteria.
''Sec. 3005. Application and use of adopted standards and implementation specifications by Federal agencies.
''Sec. 3006. Voluntary application and use of adopted standards and implementation specifications by private entities.
''Sec. 3007. Federal health information technology.
''Sec. 3008. Transitions.
''Sec. 3009. Miscellaneous provisions.
Sec. 13102. Technical amendment.
PART 2-APPLICATION AND USE OF ADOPTED HEALTH INFORMATION TECHNOLOGY
The OMOP CDM best met the criteria for supporting data sharing from longitudinal EHR-based studies. Conclusions may differ for other uses and associated data element sets, but the methodology reported here is easily adaptable to common data model evaluation for other uses.
The results support the hypothesis that topic links are more efficient than nonspecific links with respect to time spent seeking information. It is unclear whether the statistical difference demonstrated will translate into a clinically significant impact. However, the overall results confirm previous evidence that infobuttons help clinicians answer questions at the point of care, and they demonstrate a modest incremental gain in the efficiency of information delivery for routine users of this tool.
Background Clinical experts' cognitive mechanisms for managing complexity have implications for the design of future innovative healthcare systems. The purpose of the study is to examine the constituents of decision complexity and explore the cognitive strategies clinicians use to control and adapt to their information environment.

Methods We used Cognitive Task Analysis (CTA) methods to interview 10 Infectious Disease (ID) experts at the University of Utah and the Salt Lake City Veterans Administration Medical Center. Participants were asked to recall a complex, critical, and vivid antibiotic-prescribing incident using the Critical Decision Method (CDM), a type of CTA. Across the four iterations of the CDM, questions were posed to fully explore the incident, focusing in depth on the clinical components underlying the complexity. Probes were included to assess the cognitive and decision strategies used by participants.

Results Three themes emerged as the constituents of decision complexity experienced by the ID experts: 1) the overall clinical picture does not match the pattern, 2) a lack of comprehension of the situation, and 3) dealing with social and emotional pressures such as fear and anxiety. These factors almost always occurred together, creating unexpected events and uncertainty in clinical reasoning. Five themes emerged in the analyses of how the experts dealt with this complexity. Expert clinicians frequently 1) used watchful waiting instead of over-prescribing antibiotics, 2) engaged in theory of mind to project and simulate other practitioners' perspectives, 3) reduced very complex cases to simple heuristics, 4) employed anticipatory thinking to plan and re-plan events, and 5) consulted with peers to share knowledge, solicit opinions, and seek help on patient cases.

Conclusion The cognitive strategies for dealing with decision complexity found in this study have important implications for the design of future decision support systems for the management of complex patients.

Electronic supplementary material The online version of this article (doi:10.1186/s12911-015-0221-z) contains supplementary material, which is available to authorized users.
Background A major barrier to the practice of evidence-based medicine is efficiently finding scientifically sound studies on a given clinical topic.

Objective To investigate a deep learning approach to retrieving scientifically sound treatment studies from the biomedical literature.

Methods We trained a convolutional neural network using a noisy dataset of 403,216 PubMed citations, with title and abstract as features. The deep learning model was compared with state-of-the-art search filters: PubMed's Clinical Queries Broad treatment filter, McMaster's textword search strategy (no Medical Subject Heading, MeSH, terms), and the Clinical Queries Balanced treatment filter. A previously annotated dataset (Clinical Hedges) was used as the gold standard.

Results The deep learning model obtained significantly lower recall than the Clinical Queries Broad treatment filter (96.9% vs 98.4%; P<.001) and equivalent recall to McMaster's textword search (96.9% vs 97.1%; P=.57) and the Clinical Queries Balanced filter (96.9% vs 97.0%; P=.63). Deep learning obtained significantly higher precision than the Clinical Queries Broad filter (34.6% vs 22.4%; P<.001) and McMaster's textword search (34.6% vs 11.8%; P<.001), but significantly lower precision than the Clinical Queries Balanced filter (34.6% vs 40.9%; P<.001).

Conclusions Deep learning performed well compared with state-of-the-art search filters, especially when citations were not indexed. Unlike previous machine learning approaches, the proposed deep learning model does not require feature engineering, or time-sensitive or proprietary features such as MeSH terms and bibliometrics. Deep learning is a promising approach to identifying reports of scientifically rigorous clinical research. Further work is needed to optimize the deep learning model and to assess generalizability to other areas, such as diagnosis, etiology, and prognosis.
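The architecture described, a convolutional network over citation title and abstract text, can be sketched as a forward pass: embed tokens, convolve fixed-width filters over the sequence, max-pool over time, then apply a sigmoid output. Everything below (the toy vocabulary, dimensions, and random untrained weights) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and untrained parameters -- illustrative only.
VOCAB = {"trial": 0, "randomized": 1, "placebo": 2, "cohort": 3, "the": 4}
EMB_DIM, N_FILTERS, WIDTH = 8, 4, 3

embeddings = rng.normal(size=(len(VOCAB), EMB_DIM))
filters = rng.normal(size=(N_FILTERS, WIDTH, EMB_DIM))
w_out, b_out = rng.normal(size=N_FILTERS), 0.0

def cnn_score(tokens):
    """Forward pass of a 1D text CNN: embed, convolve, ReLU,
    max-over-time pool, then a sigmoid-activated linear layer."""
    ids = [VOCAB[t] for t in tokens if t in VOCAB]
    x = embeddings[ids] if ids else np.zeros((0, EMB_DIM))
    if len(x) < WIDTH:                      # pad short sequences
        x = np.vstack([x, np.zeros((WIDTH - len(x), EMB_DIM))])
    windows = np.stack([x[i:i + WIDTH] for i in range(len(x) - WIDTH + 1)])
    conv = np.tensordot(windows, filters, axes=([1, 2], [1, 2]))
    pooled = np.maximum(conv, 0).max(axis=0)   # ReLU + max pooling
    return 1 / (1 + np.exp(-(pooled @ w_out + b_out)))
```

In practice the score would be thresholded to trade recall against precision, which is the trade-off the study reports against the Clinical Queries filters.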
Background Clinicians use electronic knowledge resources, such as Micromedex, UpToDate, and Wikipedia, to deliver evidence-based care and engage in point-of-care learning. Despite this use in clinical practice, their impact on patient care and learning outcomes is incompletely understood. A comprehensive synthesis of available evidence regarding the effectiveness of electronic knowledge resources would guide clinicians, health care system administrators, medical educators, and informaticians in making evidence-based decisions about their purchase, implementation, and use.

Objective The aim of this review is to quantify the impact of electronic knowledge resources on clinical and learning outcomes.

Methods We searched MEDLINE, Embase, PsycINFO, and the Cochrane Library for articles published from 1991 to 2017. Two authors independently screened studies for inclusion and extracted outcomes related to knowledge, skills, attitudes, behaviors, patient effects, and cost. We used random-effects meta-analysis to pool standardized mean differences (SMDs) across studies.

Results Of 10,811 studies screened, we identified 25 eligible studies published between 2003 and 2016. A total of 5 studies were randomized trials, 22 involved physicians in practice or training, and 10 reported potential conflicts of interest. A total of 15 studies compared electronic knowledge resources with no intervention. Of these, 7 reported clinician behaviors, with a pooled SMD of 0.47 (95% CI 0.27 to 0.67; P<.001), and 8 reported objective patient effects, with a pooled SMD of 0.19 (95% CI 0.07 to 0.32; P=.003). Heterogeneity was large (I2>50%) across studies. When compared with other resources (7 studies, not amenable to meta-analytic pooling), the use of electronic knowledge resources was associated with increased frequency of answering questions and perceived benefits on patient care, with variable impact on time to find an answer. A total of 2 studies compared different implementations of the same electronic knowledge resource, and 4 studies compared different commercial electronic knowledge resources, with variable results.

Conclusions We found statistically significant associations between the use of electronic knowledge resources and improved clinician behaviors and patient effects. When compared with other resources, the use of electronic knowledge resources was associated with increased success in answering clinical questions, with variable impact on speed. Comparisons of different implementation strategies of the same electronic knowledge resource suggest benefits from allowing clinicians to choose to access the resource, rather than displaying resource information automatically, and from integrating patient-specific information. Implementation strategies can significantly influence outcomes, but few studies have examined such factors.
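The random-effects pooling of SMDs used in this review can be sketched with the DerSimonian-Laird estimator, the most common choice for this design: estimate between-study variance from Cochran's Q, then reweight each study by the inverse of its total (within- plus between-study) variance. The function and the example effect sizes below are generic illustrations, not the review's actual analysis or data.

```python
import math

def pool_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate with a 95% CI.

    effects:   per-study effect sizes (e.g. SMDs)
    variances: per-study sampling variances
    """
    w = [1 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                        # between-study variance
    w_re = [1 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```

With large heterogeneity (I2 > 50%, as reported), tau2 dominates the weights, widening the confidence interval relative to a fixed-effect analysis.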