Introduction
Many health providers and communicators who are concerned that patients will not understand numbers instead use verbal probabilities (e.g., terms such as “rare” or “common”) to convey the gist of a health message.

Objective
To assess patient interpretation of and preferences for verbal probability information in health contexts.

Methods
We conducted a systematic review of literature published through September 2020. Original studies conducted in English with samples representative of lay populations were included if they assessed health-related information and elicited either (a) numerical estimates of verbal probability terms or (b) preferences for verbal vs. quantitative risk information.

Results
We identified 33 original studies that referenced 145 verbal probability terms, 45 of which were included in at least two studies and 19 in three or more. Numerical interpretations of each verbal term were extremely variable. For example, average interpretations of the term “rare” ranged from 7% to 21%, and for “common,” the range was 34% to 71%. In a subset of 9 studies, lay estimates of verbal probability terms were far higher than the standard interpretations established by the European Commission for drug labels. In 10 of 12 samples where preferences were elicited, most participants preferred numerical information, alone or in combination with verbal labels.

Conclusion
Numerical interpretation of verbal probabilities is extremely variable and does not correspond well to the numerical probabilities established by expert panels. Most patients appear to prefer quantitative risk information, alone or in combination with verbal labels. Health professionals should be aware that avoiding numeric information to describe risks may not match patient preferences, and that patients interpret verbal risk terms in a highly variable way.
The recognition, disambiguation, and expansion of medical abbreviations and acronyms is of utmost importance in natural language processing to prevent medically dangerous misinterpretations. To support recognition, disambiguation, and expansion, we present the Medical Abbreviation and Acronym Meta-Inventory, a deep database of medical abbreviations. A systematic harmonization of eight source inventories across multiple healthcare specialties and settings identified 104,057 abbreviations with 170,426 corresponding senses. Automated cross-mapping of synonymous records using state-of-the-art machine learning reduced redundancy, which simplifies future application. Additional features include semi-automated quality control to remove errors. The Meta-Inventory demonstrated high completeness, or coverage, of abbreviations and senses in new clinical text, a substantial improvement over the next largest repository (6–14% increase in abbreviation coverage; 28–52% increase in sense coverage). To our knowledge, the Meta-Inventory is the most complete compilation of medical abbreviations and acronyms in American English to date. The multiple sources and high coverage support application in varied specialties and settings, allowing for cross-institutional natural language processing, which previous inventories did not support. The Meta-Inventory is available at https://bit.ly/github-clinical-abbreviations.
Introduction
The number of readmission risk prediction models available has increased rapidly, and these models are used extensively for health decision-making. Unfortunately, readmission models can be subject to flaws in their development and validation, as well as limitations in their clinical usefulness.

Objective
To critically appraise readmission models in the published literature using Delphi-based recommendations for their development and validation.

Methods
We used the modified Delphi process to create Critical Appraisal of Models that Predict Readmission (CAMPR), which lists expert recommendations focused on development and validation of readmission models. Guided by CAMPR, two researchers independently appraised published readmission models in two recent systematic reviews and concurrently extracted data to generate reference lists of eligibility criteria and risk factors.

Results
We found that published models (n=81) followed 6.8 recommendations (45%) on average. Many models had weaknesses in their development, including failure to internally validate (12%), failure to account for readmission at other institutions (93%), failure to account for missing data (68%), failure to discuss data preprocessing (67%), and failure to state the model’s eligibility criteria (33%).

Conclusions
The high prevalence of weaknesses in model development identified in the published literature is concerning, as these weaknesses are known to compromise predictive validity. CAMPR may support researchers, clinicians, and administrators in identifying and preventing future weaknesses in model development.