Objectives: Scanned documents (SDs), while common in electronic health records and potentially rich in clinically relevant information, rarely fit well into clinician workflow. Here, we identify scanned imaging reports requiring follow-up with high recall and practically useful precision.
Materials and Methods: We focused on identifying imaging findings for 3 common causes of malpractice claims: (1) potentially malignant breast lesions (mammography), (2) potentially malignant lung lesions (chest computed tomography [CT]), and (3) long-bone fractures (X-ray). We train our ClinicalBERT-based pipeline on existing typed/dictated reports classified manually or using ICD-10 codes, evaluate it on a test set of manually classified SDs, and compare it against a string-matching baseline.
Results: A total of 393 mammogram, 305 chest CT, and 683 bone X-ray reports were manually reviewed. The string-matching approach had an F1 of 0.667. For mammograms, chest CTs, and bone X-rays, respectively: models trained on manually classified training data and optimized for F1 reached F1 scores of 0.900, 0.905, and 0.817, while separate models optimized for recall achieved a recall of 1.000 with precisions of 0.727, 0.518, and 0.275. Models trained on ICD-10-labelled data and optimized for F1 achieved F1 scores of 0.647, 0.830, and 0.643, while those optimized for recall achieved a recall of 1.000 with precisions of 0.407, 0.683, and 0.358.
Discussion: Our pipeline can identify abnormal reports with potentially useful performance, decreasing the manual effort required to screen for abnormal findings that require follow-up.
Conclusion: It is possible to automatically identify clinically significant abnormalities in SDs with high recall and practically useful precision in a generalizable and minimally laborious way.
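As a rough illustration of the classification step described above (a minimal sketch, not the authors' actual code), the Python snippet below loads a publicly available ClinicalBERT checkpoint and scores a report as needing follow-up. The model name, the 512-token truncation, and the adjustable decision threshold that trades precision for recall are assumptions for illustration; the classification head would first need fine-tuning on labeled reports.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Publicly available ClinicalBERT checkpoint (an assumption; the paper's exact
# model/weights may differ). The sequence-classification head is randomly
# initialized here and must be fine-tuned on labeled reports before use.
MODEL_NAME = "emilyalsentzer/Bio_ClinicalBERT"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def needs_followup(report_text: str, threshold: float = 0.5) -> bool:
    """Score one report; lowering `threshold` trades precision for recall,
    mirroring the paper's separate recall-optimized operating point."""
    inputs = tokenizer(report_text, truncation=True, max_length=512,
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    p_abnormal = torch.softmax(logits, dim=-1)[0, 1].item()
    return p_abnormal >= threshold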
Introduction: In the context of competency-based medical education, poor student performance must be accurately documented to allow learners to improve and to protect the public. However, faculty may be reluctant to provide evaluations that could be perceived as negative, and clerkship directors report that some students pass who should have failed. Student perception of faculty may be considered in faculty promotion, teaching awards, and leadership positions. Faculty of lower academic rank may therefore perceive themselves to be more vulnerable and be less likely to document poor student performance. This study investigated faculty characteristics associated with low performance evaluations (LPEs).
Method: The authors analysed individual faculty evaluations of medical students who completed the third-year clerkships over 15 years, using a generalised mixed regression model to assess the association of evaluator academic rank with the likelihood of an LPE. Other available factors related to experience or academic vulnerability were incorporated, including faculty age, race, ethnicity, and gender.
Results: The authors identified 50 120 evaluations by 585 faculty on 3447 students between January 2007 and April 2021. Faculty were more likely to give LPEs at the midpoint evaluation (4.9%) than at the final evaluation (1.6%) (odds ratio [OR] = 4.004, 95% confidence interval [CI] [3.59, 4.53]; p < 0.001). The likelihood of an LPE decreased significantly during the 15-year study period (OR = 0.94 [0.90, 0.97]; p < 0.01). Full professors were significantly more likely to give an LPE than assistant professors (OR = 1.62 [1.08, 2.43]; p = 0.02). Women were more likely to give LPEs than men (OR = 1.88 [1.37, 2.58]; p < 0.01). Other faculty characteristics, including race and experience, were not associated with LPEs.
Conclusions: The number of LPEs decreased over time, and senior faculty were more likely than assistant professors to document poor medical student performance.
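To make the reported statistics concrete, here is a minimal Python sketch (not the study's analysis code) of how an odds ratio and its 95% confidence interval are recovered from a logistic-regression coefficient. The coefficient and standard error are back-derived from the reported full-professor estimate purely for illustration, not taken from the study's model output.

import math

# Hypothetical log-odds coefficient and standard error, back-derived from the
# reported full-professor estimate (OR = 1.62, 95% CI [1.08, 2.43]).
beta = 0.482
se = 0.207

odds_ratio = math.exp(beta)               # exponentiate log-odds -> odds ratio
ci_low = math.exp(beta - 1.96 * se)       # 95% CI bounds on the OR scale
ci_high = math.exp(beta + 1.96 * se)
print(f"OR = {odds_ratio:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
# -> OR = 1.62, 95% CI [1.08, 2.43]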
This paper describes a natural language processing (NLP) approach to extracting lactation-specific drug information from two sources: FDA-mandated drug labels and the NLM Drugs and Lactation Database (LactMed). A frame-semantic approach is used, and the paper describes the selected frames, their annotation on a set of 900 sections from drug labels and LactMed articles, and the NLP system that extracts such frame instances automatically. The ultimate goal of the project is to use the system to identify discrepancies in lactation-related drug information between these resources.
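As an illustration of what an extracted frame instance might look like (a sketch under stated assumptions: the frame name, role labels, and example sentence below are hypothetical, not the paper's actual schema), consider this minimal Python representation pairing a trigger span with its labeled frame elements.

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Span:
    start: int  # character offset into the section text (end-exclusive)
    end: int
    text: str

@dataclass
class FrameInstance:
    frame: str                  # e.g., a hypothetical "Milk_transfer" frame
    trigger: Span               # the word or phrase that evokes the frame
    elements: Dict[str, Span] = field(default_factory=dict)  # role -> filler span

# One illustrative instance extracted from a LactMed-style sentence.
sentence = "Sertraline is excreted into breast milk in low amounts."
instance = FrameInstance(
    frame="Milk_transfer",
    trigger=Span(14, 22, "excreted"),
    elements={
        "Substance": Span(0, 10, "Sertraline"),
        "Amount": Span(43, 54, "low amounts"),
    },
)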