Background
Problem-oriented electronic health record (EHR) systems can help physicians track a patient's status and progress and organize clinical documentation, which could improve the quality of clinical data and enable data reuse. The problem list is central to a problem-oriented medical record. However, current problem lists remain incomplete because of a lack of end-user training and inaccurate content in the underlying terminologies. This leads to modifications of diagnosis code descriptions and the use of free-text notes, limiting data reuse.
Objectives
We aimed to investigate factors that influence acceptance and actual use of the problem list, and to use these to propose recommendations that increase the value of problem lists for (re)use.
Methods
Semistructured interviews were conducted with physicians, heads of medical departments, and data quality experts, who were recruited through snowball sampling. The interviews were transcribed and coded. Comments were mapped to constructs of the validated Unified Theory of Acceptance and Use of Technology (UTAUT) framework and discussed in terms of facilitators and barriers.
Results
In total, 24 interviews were conducted. We found large variability in attitudes toward problem list use. Barriers included uncertainty about who is responsible for maintaining the problem list and few perceived benefits. Facilitators included the (re)design of policies, improved (peer-to-peer) training to increase motivation, and positive peer feedback and monitoring. Motivation is best increased by sharing benefits relevant to the care process, such as providing overview, timely generation of discharge or referral letters, and reuse of data. Furthermore, the content of the underlying terminology should be improved and the problem list should be better presented in the EHR system.
Conclusion
For physicians to accept and use the problem list, policies and guidelines should be redesigned and prioritized by supervising staff. Additionally, peer-to-peer training on the benefits of using the problem list is needed.
Background
Accurate, coded problem lists are valuable for data reuse, including clinical decision support and research. However, healthcare providers frequently modify coded diagnoses by including or removing common contextual properties in free-text diagnosis descriptions: uncertainty ("suspected glaucoma"), laterality ("left glaucoma"), and temporality ("glaucoma 2002"). These contextual properties can cause a difference in meaning between the underlying diagnosis codes and the modified descriptions, inhibiting data reuse. We therefore aimed to develop and evaluate an algorithm to identify these contextual properties.
Methods
A rule-based algorithm called UnLaTem (Uncertainty, Laterality, Temporality) was developed using a single-center dataset of 288,935 diagnosis descriptions, of which 73,280 (25.4%) had been modified by healthcare providers. Internal validation of the algorithm was conducted with an independent sample of 980 unique records. A second validation was conducted with 996 records from a Dutch multicenter dataset comprising 175,210 modified descriptions from five hospitals. Two researchers independently annotated the two validation samples. Performance was measured as the recall and precision of the algorithm on the validation samples. The algorithm was then applied to the multicenter dataset to determine the prevalence of the contextual properties within the modified descriptions per specialty.
Results
For the single-center dataset, recall (and precision) for removal of uncertainty, uncertainty, laterality, and temporality were 100% (60.0%), 99.1% (89.9%), 100% (97.3%), and 97.6% (97.6%), respectively. For the multicenter dataset, recall (and precision) for removal of uncertainty, uncertainty, laterality, and temporality were 57.1% (88.9%), 86.3% (88.9%), 99.7% (93.5%), and 96.8% (90.1%). Within the modified descriptions of the multicenter dataset, 1.3% contained removal of uncertainty, 9.9% uncertainty, 31.4% laterality, and 9.8% temporality.
Conclusions
We successfully developed a rule-based algorithm, UnLaTem, to identify contextual properties in Dutch modified diagnosis descriptions. UnLaTem could be extended with more trigger terms, new rules, and recognition of term order to increase performance further. The algorithm's rules are available as additional file 2. Implementing UnLaTem in Dutch hospital systems can improve the precision of information retrieval and extraction from diagnosis descriptions, supporting data reuse purposes such as decision support and research.
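The core idea of a rule-based detector like UnLaTem can be sketched as trigger-term matching over diagnosis descriptions. The sketch below is not the published algorithm: the Dutch trigger terms and patterns are illustrative assumptions, and the real rule set is far more extensive (the paper's rules are in its additional file).

```python
import re

# Hypothetical trigger terms; the published UnLaTem rule set is more extensive.
PATTERNS = {
    "uncertainty": re.compile(r"\b(verdenking|mogelijk|waarschijnlijk|dd)\b", re.IGNORECASE),
    "laterality": re.compile(r"\b(links|rechts|beiderzijds|bdz)\b", re.IGNORECASE),
    "temporality": re.compile(r"\b(19|20)\d{2}\b"),  # a four-digit year, e.g. "glaucoom 2002"
}

def contextual_properties(description: str) -> set:
    """Return the contextual properties detected in a modified diagnosis description."""
    return {name for name, pattern in PATTERNS.items() if pattern.search(description)}

# Dutch equivalents of the paper's glaucoma examples:
print(contextual_properties("verdenking glaucoom"))  # uncertainty ("suspected glaucoma")
print(contextual_properties("glaucoom links"))       # laterality ("left glaucoma")
print(contextual_properties("glaucoom 2002"))        # temporality ("glaucoma 2002")
```

A production rule set would also need negation-style rules for removal of uncertainty and, as the authors note, recognition of term order to reduce false positives.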
Structuring clinical data in electronic health records supports reuse of data to improve quality of care, reduce costs, and perform research. This requires terminologies that map terms from the language used in a specific domain to medical concepts. Given the evolving character of medical knowledge, these terminologies need continuous maintenance. Nonetheless, little is known about terminology maintenance processes. To specify the (re)design of a terminology maintenance process, we first merged and adapted two static theoretical frameworks consisting of criteria for using a terminology, divided among the relevant stakeholders. We then applied the framework to the healthcare terminology maintenance process in the Netherlands: we held interviews with relevant stakeholders and used the framework as a checklist to identify missing criteria and bottlenecks. Saturation in the interviews and fulfilment of the criteria indicated that all bottlenecks had been discovered; the framework was therefore considered useful for redesigning a terminology maintenance process. Other countries could also benefit from this framework to discover and resolve unfulfilled maintenance criteria.
Background
Problem-oriented electronic health record (EHR) systems with complete, coded, and up-to-date (i.e., accurate) problem lists help healthcare providers track a patient's health status and make better decisions, as problem lists can provide useful summaries of important health issues. Unfortunately, problem lists are often incomplete and out-of-date (i.e., inaccurate). This is, among other reasons, because providers are under time pressure, prefer writing free-text notes, and are often unwilling to update problem lists unless they receive returns for their effort. This study aims to assess the impact of the accuracy of problem lists in EHRs on clinical decision-making.
Methods
In a laboratory setting, we will perform a crossover randomized controlled trial in which we will recruit individual Dutch healthcare providers on-site at Amsterdam University Medical Centers. Participants will be presented with the records of two patients (A and B): one with an accurate and one with an inaccurate problem list, created in a training environment of the Epic EHR in agreement with clinical experts. Randomization determines which record has the accurate problem list. Participants are told that EHR usage is being investigated and do not know which record has the accurate or inaccurate problem list. Participants will provide a motivated (Yes/No) answer on whether prescribing medication X and medication Y is appropriate. Medication Y cannot be prescribed in either patient record, due to a contraindicated diagnosis (A) and diagnosis-related medical history (B). Medication X serves as a control question and is contraindicated based on an allergy, which is documented equally in both records. The primary outcome measure is the correctness of the motivation for the correct answer on medication Y. Secondary outcome measures are: correctness of X (and Y) with the right motivation(s), and total time to answer X and Y where the motivation for Y (and X) is correct. Timestamps are registered between opening the question and confirming the two Yes/No answers. The proportions of correct answers and the times to answer will be compared using chi-square and McNemar tests and log-rank survival analyses. Alternative analytical models will be applied if necessary.
Discussion
If accurate problem lists lead to faster and better decision-making, resulting in better patient outcomes, this may motivate the development of future policies in this area, and healthcare providers may be persuaded to use and update problem lists, also leading to improved data quality and opportunities for reuse.