The COVID-19 pandemic has had a substantial and global impact on health care, and has greatly accelerated the adoption of digital technology. One of these emerging digital technologies, blockchain, has unique characteristics (eg, immutability, decentralisation, and transparency) that can be useful in multiple domains (eg, management of electronic medical records and access rights, and mobile health). We conducted a systematic review of COVID-19-related and non-COVID-19-related applications of blockchain in health care. We identified relevant reports published in MEDLINE, SpringerLink, Institute of Electrical and Electronics Engineers Xplore, ScienceDirect, arXiv, and Google Scholar up to July 29, 2021. Articles that included both clinical and technical designs, with or without prototype development, were included. A total of 85 375 articles were evaluated, with 415 full-length reports (37 related to COVID-19 and 378 not related to COVID-19) eventually included in the final analysis. The main COVID-19-related applications reported were pandemic control and surveillance, immunity or vaccine passport monitoring, and contact tracing. The top three non-COVID-19-related applications were management of electronic medical records, internet of things (eg, remote monitoring or mobile health), and supply chain monitoring. Most reports detailed technical performance of the blockchain prototype platforms (277 [66·7%] of 415), whereas nine (2·2%) studies showed real-world clinical application and adoption. The remaining studies (129 [31·1%] of 415) presented a technical design only. The most common platforms used were Ethereum and Hyperledger. Blockchain technology has numerous potential COVID-19-related and non-COVID-19-related applications in health care. However, much of the current research remains at the technical stage, with few studies providing actual clinical applications, highlighting the need to translate foundational blockchain technology into clinical use.
Background Blockchain technology has the potential to enable more secure, transparent, and equitable data management. In the health care domain, it has been applied most frequently to electronic health records. In addition to securely managing data, blockchain has significant advantages in distributing data access, control, and ownership to end users. Due to this attribute, among others, the use of blockchain to power personal health records (PHRs) is especially appealing. Objective This review aims to examine the current landscape, design choices, limitations, and future directions of blockchain-based PHRs. Methods Adopting the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines, a cross-disciplinary systematic review was performed in July 2020 on all eligible articles, including gray literature, from the following 8 databases: ACM, IEEE Xplore, MEDLINE, ScienceDirect, Scopus, SpringerLink, Web of Science, and Google Scholar. Three reviewers independently performed a full-text review and data abstraction using a standardized data collection form. Results A total of 58 articles met the inclusion criteria. In the review, we found that the blockchain PHR space has matured over the past 5 years, from purely conceptual ideas initially to an increasing trend of publications describing prototypes and even implementations. Although the eventual application of blockchain in PHRs is intended for the health care industry, the majority of the articles were found in engineering or computer science publications. Among the blockchain PHRs described, permissioned blockchains and off-chain storage were the most common design choices. Although 18 articles described a tethered blockchain PHR, all of them were at the conceptual stage. Conclusions This review revealed that although research interest in blockchain PHRs is increasing and that the space is maturing, this technology is still largely in the conceptual stage. 
As the first systematic review on blockchain PHRs, this review should serve as a basis for future reviews tracking the development of the space.
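The off-chain storage design choice noted above can be sketched minimally: the record itself stays off chain, and only its hash is written to an append-only ledger, so any tampering with the stored record is detectable. This is an illustrative sketch of the general pattern, not any reviewed system's implementation; the store and chain here are plain in-memory stand-ins.

```python
import hashlib
import json

off_chain_store = {}   # stands in for an off-chain document database
chain = []             # stands in for an append-only ledger

def store_record(record_id, record):
    """Store the record off chain; commit only its hash to the ledger."""
    data = json.dumps(record, sort_keys=True).encode()
    off_chain_store[record_id] = data
    chain.append({"id": record_id, "hash": hashlib.sha256(data).hexdigest()})

def verify_record(record_id):
    """Recompute the stored record's hash and compare it with the ledger entry."""
    data = off_chain_store[record_id]
    expected = next(e["hash"] for e in chain if e["id"] == record_id)
    return hashlib.sha256(data).hexdigest() == expected
```

Because only a fixed-size digest goes on chain, this pattern keeps sensitive PHR content off the ledger while preserving the immutability guarantee for integrity checks.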
Background: Early warning scores (EWS) have been developed as clinical prognostication tools to identify acutely deteriorating patients. In the past few years, there has been a proliferation of studies that describe the development and validation of novel machine learning-based EWS. Systematic reviews of published studies which focus on evaluating performance of both well-established and novel EWS have shown conflicting conclusions. A possible reason is the heterogeneity in validation methods applied. In this review, we aim to examine the methodologies and metrics used in studies which perform EWS validation. Methods: A systematic review of all eligible studies from the MEDLINE database and other sources was performed. Studies were eligible if they performed validation on at least one EWS and reported associations between EWS scores and inpatient mortality, intensive care unit (ICU) transfers, or cardiac arrest (CA) of adults. Two reviewers independently did a full-text review and performed data abstraction using a standardized data worksheet based on the TRIPOD (Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) checklist. Meta-analysis was not performed due to heterogeneity. Results: The key differences in validation methodologies identified were (1) validation dataset used, (2) outcomes of interest, (3) case definition, time of EWS use and aggregation methods, and (4) handling of missing values. In terms of case definition, among the 48 eligible studies, 34 used the patient episode case definition while 12 used the observation set case definition, and 2 did the validation using both case definitions. Of those that used the patient episode case definition, 18 studies validated the EWS at a single point of time, mostly using the first recorded observation. The review also found more than 10 different performance metrics reported among the studies.
Conclusions: Methodologies and performance metrics used in studies performing validation on EWS were heterogeneous, making it difficult to interpret and compare EWS performance. Standardizing EWS validation methodology and reporting can potentially address this issue.
Objective: After radical prostatectomy (RP), one-third of patients will experience biochemical recurrence (BCR), which is associated with subsequent metastasis and cancer-specific mortality. We employed machine learning (ML) algorithms to predict BCR after RP, and compare them with traditional regression models and nomograms. Methods: Utilizing a prospective Uro-oncology registry, 18 clinicopathological parameters of 1130 consecutive patients who underwent RP (2009-2018) were recorded, yielding over 20,000 data points for analysis. The data set was split into a 70:30 ratio for training and validation. Three ML models: Naïve Bayes (NB), random forest (RF), and support vector machine (SVM) were studied, and compared with traditional regression models and nomograms (Kattan, CAPSURE, John Hopkins [JHH]) to predict BCR at 1, 3, and 5 years. Results: Over a median follow-up of 70.0 months, 176 (15.6%) developed BCR, at a median time of 16.0 months (interquartile range [IQR]: 11.0-26.0). Multivariate analyses demonstrated strongest association of BCR with prostate-specific antigen (PSA) (p: 0.015), positive surgical margins (p < 0.001), extraprostatic extension (p: 0.002), seminal vesicle invasion (p: 0.004), and grade group (p < 0.001). The 3 ML models demonstrated good prediction of BCR at 1, 3, and 5 years, with the area under curves (AUC) of NB at 0.894, 0.876, and 0.894, RF at 0.846, 0.875, and 0.888, and SVM at 0.835, 0.850, and 0.855, respectively. All models demonstrated (1) robust accuracy (>0.82), (2) good calibration with minimal overfitting, (3) longitudinal consistency across the three time points, and (4) inter-model validity. The ML models were comparable to traditional regression analyses (AUC: 0.797, 0.848, and 0.862) and outperformed the three nomograms: Kattan (AUC: 0.815, 0.798, and
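The model-comparison setup described above can be sketched as follows: three classifiers (Naïve Bayes, random forest, SVM) trained on a 70:30 split and compared by AUC. This is a hedged sketch only; synthetic data stands in for the registry's 18 clinicopathological parameters and the BCR label, and the hyperparameters are illustrative, not the study's.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
X = rng.normal(size=(1130, 18))          # 1130 patients, 18 features (synthetic stand-in)
y = (X[:, 0] + rng.normal(size=1130) > 1.2).astype(int)  # stand-in BCR label

# 70:30 train/validation split, as in the study design
X_tr, X_va, y_tr, y_va = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

models = {
    "NB": GaussianNB(),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(probability=True, random_state=0),
}
aucs = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    aucs[name] = roc_auc_score(y_va, model.predict_proba(X_va)[:, 1])
```

In practice this loop would be repeated per prediction horizon (1, 3, and 5 years) with the label defined at each time point.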
The broad adoption of electronic health records (EHRs) has led to vast amounts of data being accumulated on a patient’s history, diagnosis, prescriptions, and lab tests. Advances in recommender technologies have the potential to utilize this information to help doctors personalize the prescribed medications. However, existing medication recommendation systems have yet to make use of all these information sources in a seamless manner, and they do not provide a justification for why a particular medication is recommended. In this work, we design a two-stage personalized medication recommender system called PREMIER that incorporates information from the EHR. We utilize the various weights in the system to compute the contributions from the information sources for the recommended medications. Our system models the drug interaction from an external drug database and the drug co-occurrence from the EHR as graphs. Experiment results on MIMIC-III and a proprietary outpatient dataset show that PREMIER outperforms state-of-the-art medication recommendation systems while achieving the best tradeoff between accuracy and drug-drug interaction. Case studies demonstrate that the justifications provided by PREMIER are appropriate and aligned with clinical practice.
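A drug co-occurrence graph of the kind PREMIER derives from the EHR can be sketched minimally (this is not the authors' implementation): drugs prescribed in the same visit become linked nodes, with edge weights counting how often each pair co-occurs. The drug names and visits below are hypothetical.

```python
from itertools import combinations
from collections import Counter

# hypothetical prescription lists, one per patient visit
visits = [
    ["metformin", "lisinopril", "atorvastatin"],
    ["metformin", "atorvastatin"],
    ["lisinopril", "amlodipine"],
]

# weighted edge list: (drug_a, drug_b) -> number of visits where both appear
edges = Counter()
for drugs in visits:
    for a, b in combinations(sorted(set(drugs)), 2):
        edges[(a, b)] += 1

# edges[("atorvastatin", "metformin")] == 2
```

A drug-interaction graph from an external database has the same shape but with edges marking known adverse pairs, letting a recommender trade off co-occurrence evidence against interaction risk.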
Background Early warning scores (EWS) have been developed as clinical prognostication tools to identify acutely deteriorating patients. With recent advancements in machine learning, there has been a proliferation of studies that describe the development and validation of novel EWS. Systematic reviews of published studies which focus on evaluating performance of both well-established and novel EWS have shown conflicting conclusions. A possible reason for this is the lack of consistency in the validation methods used. In this review, we aim to examine the methodologies and performance metrics used in studies which describe EWS validation. Methods A systematic review of all eligible studies in the MEDLINE database from inception to 22-Feb-2019 was performed. Studies were eligible if they performed validation on at least one EWS and reported associations between EWS scores and mortality, intensive care unit (ICU) transfers, or cardiac arrest (CA) of adults within the inpatient setting. Two reviewers independently did a full-text review and performed data abstraction using a standardized data worksheet based on the TRIPOD (Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) checklist. Meta-analysis was not performed due to heterogeneity. Results The key differences in validation methodologies identified were (1) validation population characteristics, (2) outcomes of interest, (3) case definition, intended time of use and aggregation methods, and (4) handling of missing values in the validation dataset. In terms of case definition, among the 34 eligible studies, 22 used the patient episode case definition while 10 used the observation set case definition, and 2 did the validation using both case definitions. Of those that used the patient episode case definition, 11 studies used a single point of time score to validate the EWS, most of which used the first recorded observation.
There were also more than 10 different performance metrics reported among the studies. Conclusions Methodologies and performance metrics used in studies performing validation on EWS were not consistent, making it difficult to interpret and compare EWS performance. Standardizing EWS validation methodology and reporting can potentially address this issue.
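The two case definitions discussed above can be made concrete with a small sketch: the same EWS observations are validated once per patient episode (here, the first recorded score, as most such studies did) and once per observation set (every score counts as a case). The episode data below are hypothetical, chosen only to show that the two definitions can yield different AUROCs.

```python
from sklearn.metrics import roc_auc_score

# hypothetical EWS observations per patient episode, and episode outcomes
observations = {
    "ep1": [3, 5, 7],   # deteriorated
    "ep2": [1, 2, 1],
    "ep3": [6, 4, 8],   # deteriorated
    "ep4": [2, 1, 3],
}
outcome = {"ep1": 1, "ep2": 0, "ep3": 1, "ep4": 0}

# patient-episode case definition: one score per episode (first observation)
y_ep = [outcome[e] for e in observations]
s_ep = [scores[0] for scores in observations.values()]
auc_episode = roc_auc_score(y_ep, s_ep)

# observation-set case definition: every observation is a case,
# labelled with its episode's outcome
y_obs = [outcome[e] for e, scores in observations.items() for _ in scores]
s_obs = [s for scores in observations.values() for s in scores]
auc_observation = roc_auc_score(y_obs, s_obs)
```

Other aggregation choices reported in the literature (eg, the maximum score over an episode) slot into the same frame by replacing `scores[0]`, which is exactly why the choice of case definition and aggregation must be reported for results to be comparable.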
Background Clinical risk prediction models (CRPMs) use patient characteristics to estimate the probability of having or developing a particular disease and/or outcome. While CRPMs are gaining in popularity, they have yet to be widely adopted in clinical practice. The lack of explainability and interpretability has limited their utility. Explainability is the extent to which a model’s prediction process can be described. Interpretability is the degree to which a user can understand the predictions made by a model. Methods The study aimed to demonstrate the utility of patient similarity analytics in developing an explainable and interpretable CRPM. Data was extracted from the electronic medical records of patients with type-2 diabetes mellitus, hypertension and dyslipidaemia in a Singapore public primary care clinic. We used a modified K-nearest neighbour model, which incorporated expert input, to develop a patient similarity model on this real-world training dataset (n = 7,041) and validated it on a testing dataset (n = 3,018). The results were compared with those of logistic regression, random forest (RF) and support vector machine (SVM) models built from the same dataset. The patient similarity model was then implemented in a prototype system to demonstrate the identification, explainability and interpretability of similar patients and the prediction process. Results The patient similarity model (AUROC = 0.718) was comparable to the logistic regression (AUROC = 0.695), RF (AUROC = 0.764) and SVM models (AUROC = 0.766). We packaged the patient similarity model in a prototype web application. A proof of concept demonstrated how the application provided both quantitative and qualitative information, in the form of patient narratives. This information was used to better inform and influence clinical decision-making, such as getting a patient to agree to start insulin therapy. Conclusions Patient similarity analytics is a feasible approach to develop an explainable and interpretable CRPM.
While the approach is generalizable, it can be used to develop locally relevant information, based on the database it searches. Ultimately, such an approach can generate more informative CRPMs, which can be deployed as part of clinical decision support tools to better facilitate shared decision-making in clinical practice.
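The core of a KNN-based patient similarity predictor like the one described above can be sketched in a few lines. The details here are assumptions for illustration, not the paper's model: expert input is represented as per-feature weights in a weighted Euclidean distance, and risk is estimated as the outcome rate among the k most similar patients, which also yields the neighbour indices needed for the patient narratives.

```python
import numpy as np

def knn_risk(query, patients, outcomes, weights, k=5):
    """Estimate risk as the outcome rate among the k most similar patients.

    patients: (n, f) feature matrix; outcomes: (n,) binary labels;
    weights: (f,) expert-derived feature weights (illustrative stand-in
    for the modified KNN's expert input).
    Returns (risk, indices of the k nearest patients).
    """
    d = np.sqrt((((patients - query) ** 2) * weights).sum(axis=1))
    nearest = np.argsort(d)[:k]
    return outcomes[nearest].mean(), nearest
```

Returning the neighbour indices is what makes the model interpretable: the clinician can inspect the actual similar patients behind a prediction rather than an opaque score.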
Patient similarity analytics has emerged as an essential tool to identify cohorts of patients who have similar clinical characteristics to some specific patient of interest. In this study, we propose a patient similarity measure called D3K that incorporates domain knowledge and data-driven insights. Using the electronic health records (EHRs) of 169,434 patients with either diabetes, hypertension or dyslipidaemia (DHL), we construct patient feature vectors containing demographics, vital signs, laboratory test results, and prescribed medications. We discretize the variables of interest into various bins based on domain knowledge and align the patient similarity computation with clinical guidelines. Key findings from this study are: (1) D3K outperforms baseline approaches in all seven sub-cohorts; (2) our domain knowledge-based binning strategy outperformed the traditional percentile-based binning in all seven sub-cohorts; (3) there is substantial agreement between D3K and physicians (κ = 0.746), indicating that D3K can be applied to facilitate shared decision making. This is the first study to use patient similarity analytics on a cardiometabolic syndrome-related dataset sourced from medical institutions in Singapore. We consider patient similarity among patient cohorts with the same medical conditions to develop localized models for personalized decision support to improve the outcomes of a target patient.
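The domain-knowledge binning idea behind D3K can be sketched for a single variable. The bin edges below are illustrative guideline-style HbA1c thresholds, not the study's actual bins: values are discretized into clinically meaningful categories, and similarity then depends on bin distance rather than raw numeric distance, so two patients on the same side of a treatment threshold count as more alike than a percentile-based split would suggest.

```python
def bin_value(value, edges):
    """Return the index of the guideline bin a value falls into."""
    for i, edge in enumerate(edges):
        if value < edge:
            return i
    return len(edges)

# illustrative HbA1c (%) thresholds: normal / prediabetes / controlled / uncontrolled
HBA1C_EDGES = [5.7, 6.5, 8.0]

def bin_similarity(a, b, edges=HBA1C_EDGES):
    """1.0 when two values share a bin, decreasing linearly with bin distance."""
    return 1 - abs(bin_value(a, edges) - bin_value(b, edges)) / len(edges)
```

A full measure in this spirit would compute such per-variable similarities across the feature vector (demographics, vitals, labs, medications) and combine them into one score; percentile-based binning differs only in how the edges are chosen, which is exactly the design choice the study compares.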