Background: Clinical decision support systems (CDSSs) are an integral component of today’s health information technologies. They assist with interpretation, diagnosis, and treatment. A CDSS can be embedded throughout the patient safety continuum, providing reminders, recommendations, and alerts to health care providers. Although CDSSs have been shown to reduce medical errors and improve patient outcomes, they have fallen short of their full potential. User acceptance has been identified as one of the potential reasons for this shortfall. Objective: The purpose of this paper was to conduct a critical review and task analysis of CDSS research and to develop a new framework for CDSS design in order to achieve user acceptance. Methods: A critical review of CDSS papers was conducted with a focus on user acceptance. To gain a greater understanding of the problems associated with CDSS acceptance, we conducted a task analysis to identify and describe the goals, user input, system output, knowledge requirements, and constraints from two different perspectives: the machine (ie, the CDSS engine) and the user (ie, the physician). Results: Favorability of CDSSs was based on user acceptance of clinical guidelines, reminders, alerts, and diagnostic suggestions. We propose two models: (1) the user acceptance and system adaptation design model, which includes optimizing CDSS design based on user needs/expectations, and (2) the input-process-output-engage model, which reveals to users the processes that govern CDSS outputs. Conclusions: This research demonstrates that incorporating the proposed models will improve user acceptance and support the beneficial effects of CDSS adoption. Ultimately, if a user does not accept technology, this not only poses a threat to the use of the technology but can also pose a threat to the health and well-being of patients.
Dopamine is a catecholamine that serves as a neurotransmitter in the central and peripheral nervous systems. Non-invasive, reliable, and high-throughput techniques for its quantification are needed to assess dysfunctions of the dopaminergic system and monitor therapies. We developed and validated a competitive ELISA for direct determination of dopamine in urine samples. The method provides high specificity, good accuracy, and precision (average inter-assay variation < 12%). The analysis is not affected by general urinary components and structurally related drugs and metabolites. The correlation between ELISA and LC-MS/MS analyses was very good (r = 0.986, n = 28). The reference range was 64-261 μg/g Cr (n = 64). Week-to-week biological variations of second morning urinary dopamine under free-living conditions were 23.9% for within- and 35.5% for between-subject variation (n = 10). The assay is applied in monitoring Parkinson's disease patients under different treatments. Urinary dopamine levels significantly increase in a dose-dependent manner for Parkinson's disease patients under L-DOPA treatment. The present ELISA provides a cost-effective alternative to chromatographic methods to monitor patients receiving dopamine-restoring treatment to ensure appropriate dosing and clinical efficacy. The method can be used in pathological research for the assessment of possible peripheral biological markers for disorders related to the dopaminergic system.
Depression is a common disorder with physical and psychological manifestations often associated with low serotonin. Since noninvasive diagnostic tools for depression are sparse, we evaluated the clinical utility of a novel ELISA for the measurement of serotonin in urine from depressed subjects and from subjects under antidepressant therapy. We developed a competitive ELISA for direct measurement of serotonin in derivatized urine samples. Assay performance was evaluated and applied to clinical samples. The analytical range of the assay was from 6.7 to 425 μg serotonin/g creatinine (Cr). The limit of quantification was 4.7 μg/g Cr. The average recovery for spiked urine samples was 104.4%. Average intra-assay variation was 4.4%, and inter-assay variation was <20%. The serotonin analysis was very specific. No significant interferences were observed for 44 structurally and nonstructurally related urinary substances. Very good correlation was observed between urinary serotonin levels measured by ELISA and liquid chromatography tandem mass spectrometry (LC-MS/MS; ELISA = 1.16 × LC-MS/MS - 53.8; r = 0.965; mean % bias = 11%; n = 18). Serotonin was stable in acidified urine for 30 days at room temperature and at -20 °C. The established reference range for serotonin was 54-366 μg/g Cr (n = 64). Serotonin levels detected in depressed patients (87.53 ± 4.89 μg/g Cr; n = 60) were significantly lower (p < 0.001) than in nondepressed subjects (153.38 ± 7.99 μg/g Cr). Urinary excretion of serotonin in depressed individuals significantly increased after antidepressant treatment with 5-hydroxytryptophan and/or a selective serotonin reuptake inhibitor (p < 0.01). The present ELISA provides a convenient and robust method for monitoring urinary serotonin. It is suitable to monitor serotonin imbalances and may be particularly helpful in evaluating antidepressant therapies.
Objective: This paper explores the implications of artificial intelligence (AI) on the management of healthcare data and information and how AI technologies will affect the responsibilities and work of health information management (HIM) professionals. Methods: A literature review was conducted of both peer-reviewed literature and published opinions on current and future use of AI technology to collect, store, and use healthcare data. The authors also sought insights from key HIM leaders via semi-structured interviews conducted both on the phone and by email. Results: The following HIM practices are impacted by AI technologies: 1) Automated medical coding and capturing AI-based information; 2) Healthcare data management and data governance; 3) Patient privacy and confidentiality; and 4) HIM workforce training and education. Discussion: HIM professionals must focus on improving the quality of coded data that is being used to develop AI applications. HIM professionals' ability to identify data patterns will be an important skill as automation advances, though additional skills in data analysis tools and techniques are needed. In addition, HIM professionals should consider how current patient privacy practices apply to AI application, development, and use. Conclusions: AI technology will continue to evolve, as will the role of HIM professionals, who are in a unique position to take on emerging roles with their depth of knowledge on the sources and origins of healthcare data. The challenge for HIM professionals is to identify leading practices for the management of healthcare data and information in an AI-enabled world.
Background: Numerous studies have revealed widespread clinician frustration with the usability of electronic health records (EHRs) that is counterproductive to adoption of EHR systems to meet the aims of health-care reform. With poor system usability comes increased risk of negative unintended consequences. Usability issues could lead to user error and workarounds that have the potential to compromise patient safety and negatively impact the quality of care.[1] While there is ample research on EHR usability, there is little information on the usability of laboratory information systems (LISs). Yet, LISs facilitate the timely provision of a great deal of the information needed by physicians to make patient care decisions.[2] Medical and technical advances in genomics that require processing of an increased volume of complex laboratory data further underscore the importance of developing user-friendly LISs. This study aims to add to the body of knowledge on LIS usability. Methods: A survey was distributed among LIS users at hospitals across the United States. The survey consisted of the ten-item System Usability Scale (SUS). In addition, participants were asked to rate the ease of performing 24 common tasks with a LIS. Finally, respondents provided comments on what they liked and disliked about using the LIS, to provide diagnostic insight into perceived LIS usability. Results: The overall mean SUS score of 59.7 for the LISs evaluated is significantly lower than the benchmark of 68 (P < 0.001). All LISs evaluated received mean SUS scores below 68 except for Orchard Harvest (78.7). While years of experience using the LIS were found to be a statistically significant influence on mean SUS scores, the combined effect of years of experience and LIS used did not account for the statistically significant difference in mean SUS score between Orchard Harvest and each of the other LISs evaluated. Conclusions: The results of this study indicate that the overall usability of LISs is poor, lagging behind that of the systems evaluated across 446 usability surveys.
Objective: The purpose of this paper is to review the current state of health information technology (HIT) training programs and identify limitations in workforce expectations and student/trainee level of preparedness. A framework is proposed to build a more effective training program, differentiate HIT and health informatics, and emphasize the critical role of interprofessional collaboration for informatics-related curricula. We define interprofessionalism as multi-sector collaboration among academia, industry (health care organizations), and vendors to produce competent informaticians. Methods: A critical review of published HIT and health informatics curricular competencies was conducted, including those published by the Office of the National Coordinator for Health Information Technology (ONC), the American Medical Informatics Association (AMIA), the International Medical Informatics Association (IMIA), and the Commission on Accreditation for Health Informatics and Information Management Education. A review of literature related to HIT and health informatics education and training was also completed. Results: The paper presents a framework for promoting health informatics training with an interprofessional foundation. The core components of the curricular competencies include understanding the healthcare system, biomedical data, computer programming, data analytics, usability, and technology infrastructure. To deliver the content effectively, programs require collaboration between academic institutions, healthcare organizations, and industry vendors. Conclusions: HIT and health informatics-related training programs, in their current form, are not meeting industry needs. The proposed framework addresses the current limitations by providing unique pathways for content delivery and by promoting interprofessional collaboration and partnerships between academia and industry.