Background: Machine learning (ML) is a growing field in medicine. This narrative review describes the current body of literature on ML for clinical decision support in infectious diseases (ID). Objectives: We aim to inform clinicians about the use of ML for diagnosis, classification, outcome prediction and antimicrobial management in ID. Sources: References for this review were identified through searches of MEDLINE/PubMed, EMBASE, Google Scholar, bioRxiv, ACM Digital Library, arXiv and IEEE Xplore Digital Library up to July 2019. Content: We found 60 unique ML clinical decision support systems (ML-CDSS) aiming to assist ID clinicians. Overall, 37 (62%) focused on bacterial infections, 10 (17%) on viral infections, nine (15%) on tuberculosis and four (7%) on any kind of infection. Among them, 20 (33%) addressed the diagnosis of infection, 18 (30%) the prediction, early detection or stratification of sepsis, 13 (22%) the prediction of treatment response, four (7%) the prediction of antibiotic resistance, three (5%) the choice of antibiotic regimen and two (3%) the choice of a combination antiretroviral therapy. The ML-CDSS were developed for intensive care units (n = 24, 40%), ID consultation (n = 15, 25%), medical or surgical wards (n = 13, 20%), the emergency department (n = 4, 7%), primary care (n = 3, 5%) and antimicrobial stewardship (n = 1, 2%). Fifty-three ML-CDSS (88%) were developed using data from high-income countries and seven (12%) with data from low- and middle-income countries (LMIC). The evaluation of ML-CDSS was limited to measures of performance (e.g. sensitivity, specificity) for 57 ML-CDSS (95%) and included data from use in clinical practice for three (5%). Implications: Considering comprehensive patient data from socioeconomically diverse healthcare settings, including primary care and LMICs, may improve the ability of ML-CDSS to suggest decisions adapted to various clinical contexts.
Current gaps identified in the evaluation of ML-CDSS must also be addressed in order to understand the potential impact of such tools for clinicians and patients.
We developed an integrated chip for real-time amplification and detection of nucleic acid using pH-sensing complementary metal-oxide semiconductor (CMOS) technology. Here we show an amplification-coupled detection method for directly measuring released hydrogen ions during nucleotide incorporation rather than relying on indirect measurements such as fluorescent dyes. This is a label-free, non-optical, real-time method for detecting and quantifying target sequences by monitoring pH signatures of native amplification chemistries. The chip has ion-sensitive field effect transistor (ISFET) sensors, temperature sensors, resistive heating, signal processing and control circuitry all integrated to create a full system-on-chip platform. We evaluated the platform using two amplification strategies: PCR and isothermal amplification. Using this platform, we genotyped and discriminated unique single-nucleotide polymorphism (SNP) variants of the cytochrome P450 family from crude human saliva. We anticipate this semiconductor technology will enable the creation of devices for cost-effective, portable and scalable real-time nucleic acid analysis.
The factors that drive non-expert decision making must be given greater consideration when designing CDSS interventions. Future work should aim to expand CDSS beyond simply selecting appropriate antimicrobials, with clear and systematic reporting frameworks for CDSS interventions developed to address current gaps in the reporting of evidence.
Control of blood glucose is essential for diabetes management. Current digital therapeutic approaches for subjects with Type 1 diabetes mellitus (T1DM), such as the artificial pancreas and insulin bolus calculators, leverage machine learning techniques for predicting subcutaneous glucose for improved control. Deep learning has recently been applied in healthcare and medical research to achieve state-of-the-art results in a range of tasks including disease diagnosis and patient state prediction, among others. In this work, we present a deep learning model that is capable of forecasting glucose levels with leading accuracy for simulated patient cases (RMSE = 9.38±0.71 [mg/dL] over a 30-minute horizon, RMSE = 18.87±2.25 [mg/dL] over a 60-minute horizon) and real patient cases (RMSE = 21.07±2.35 [mg/dL] for 30-minute, RMSE = 33.27±4.79 [mg/dL] for 60-minute). In addition, the model provides competitive performance in effective prediction horizon (PH_eff) with minimal time lag, both on a simulated patient dataset (PH_eff = 29.0±0.7 for 30-min and PH_eff = 49.8±2.9 for 60-min) and on a real patient dataset (PH_eff = 19.3±3.1 for 30-min and PH_eff = 29.3±9.4 for 60-min). This approach is evaluated on a dataset of 10 simulated cases generated from the UVa/Padova simulator and a clinical dataset of 10 real cases, each containing glucose readings, insulin bolus and meal (carbohydrate) data. Performance of the recurrent convolutional neural network is benchmarked against four algorithms. The proposed algorithm is implemented on an Android mobile phone, with an execution time of 6 ms on the phone compared with 780 ms on a laptop.
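The headline metric above, RMSE between reference glucose readings and forecasts over a prediction horizon, can be computed as in the following minimal sketch (the arrays are illustrative values in mg/dL, not data from the study):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between reference CGM readings and forecasts."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Hypothetical 30-minute-horizon forecasts vs. reference glucose (mg/dL)
reference = [110, 118, 125, 131, 140, 138]
forecast = [112, 115, 128, 129, 143, 135]
print(round(rmse(reference, forecast), 2))  # → 2.71
```

The same function applies unchanged whether the forecasts come from a recurrent convolutional network, as in this work, or from any of the benchmark algorithms.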
For people with Type 1 diabetes (T1D), forecasting of blood glucose (BG) can be used to effectively avoid hyperglycemia, hypoglycemia and associated complications. The latest continuous glucose monitoring (CGM) technology allows people to observe glucose in real-time. However, an accurate glucose forecast remains a challenge. In this work, we introduce GluNet, a framework that leverages a personalized deep neural network to predict the probabilistic distribution of short-term (30-60 minutes) future CGM measurements for subjects with T1D based on their historical data, including glucose measurements, meal information, insulin doses, and other factors. It adopts the latest deep learning techniques consisting of four components: data pre-processing, label transform/recover, multiple layers of dilated convolutional neural network (CNN), and post-processing. The method is evaluated in silico for both adult and adolescent subjects. The results show significant improvements over existing methods in the literature through a comprehensive comparison in terms of root mean square error (RMSE) (8.88 ± 0.77 mg/dL) with short time lag (0.83 ± 0.40 minutes) for prediction horizons (PH) = 30 mins (minutes), and RMSE (19.90 ± 3.17 mg/dL) with time lag (16.43 ± 4.07 mins) for PH = 60 mins for virtual adult subjects. In addition, GluNet is also tested on two clinical data sets. Results show that it achieves an RMSE (19.28±2.76 mg/dL) with time lag (8.03 ± 4.07 mins) for PH = 30 mins and an RMSE (31.83±3.49 mg/dL) with time lag (17.78±8.00 mins) for PH = 60 mins. These are the best reported results for glucose forecasting when compared with other methods including the neural network for predicting glucose (NNPG), the support vector regression (SVR), the latent variable with exogenous input (LVX), and the auto regression with exogenous input (ARX) algorithm.
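The dilated CNN layers at the core of GluNet are causal convolutions whose taps are spaced by a dilation factor, so each output depends only on past samples while the receptive field grows with depth. A minimal single-layer sketch in NumPy, with illustrative filter values and input series not taken from the paper:

```python
import numpy as np

def dilated_causal_conv1d(x, w, dilation):
    """Causal 1-D convolution: output[t] depends only on x[t], x[t-d], x[t-2d], ...
    x: input series; w: filter taps, oldest first; dilation: gap between taps."""
    x = np.asarray(x, dtype=float)
    k = len(w)
    pad = (k - 1) * dilation                 # left-pad so the filter never sees the future
    xp = np.concatenate([np.zeros(pad), x])
    out = np.zeros_like(x)
    for t in range(len(x)):
        # taps at padded indices t, t+d, ..., t+pad map to x[t-pad], ..., x[t]
        taps = xp[t : t + pad + 1 : dilation]
        out[t] = np.dot(w, taps)
    return out

x = [1, 2, 3, 4]
print(dilated_causal_conv1d(x, w=[1.0, 1.0], dilation=2))  # [1. 2. 4. 6.]
```

Stacking such layers with dilations 1, 2, 4, ... lets a deep network cover hours of CGM history with few parameters, which is the design rationale behind WaveNet-style architectures like this one.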
Background Enhanced methods of drug monitoring are required to support the individualisation of antibiotic dosing. We report the first-in-human evaluation of real-time phenoxymethylpenicillin monitoring using a minimally invasive microneedle-based β-lactam biosensor in healthy volunteers.
The COVID-19 pandemic is a global health emergency characterized by the high rate of transmission and ongoing increase of cases globally. Rapid point-of-care (PoC) diagnostics to detect the causative virus, SARS-CoV-2, are urgently needed to identify and isolate patients, contain its spread and guide clinical management. In this work, we report the development of a rapid PoC diagnostic test (<20 min) based on reverse transcriptase loop-mediated isothermal amplification (RT-LAMP) and semiconductor technology for the detection of SARS-CoV-2 from extracted RNA samples. The developed LAMP assay was tested on a real-time benchtop instrument (RT-qLAMP) showing a lower limit of detection of 10 RNA copies per reaction. It was validated against extracted RNA from 183 clinical samples including 127 positive samples (screened by the CDC RT-qPCR assay). Results showed 91% sensitivity and 100% specificity when compared to RT-qPCR and average positive detection times of 15.45 ± 4.43 min. For validating the incorporation of the RT-LAMP assay onto our PoC platform (RT-eLAMP), a subset of samples was tested (n = 52), showing average detection times of 12.68 ± 2.56 min for positive samples (n = 34), demonstrating a comparable performance to a benchtop commercial instrument. Paired with a smartphone for results visualization and geolocalization, this portable diagnostic platform with secure cloud connectivity will enable real-time case identification and epidemiological surveillance.
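The reported 91% sensitivity and 100% specificity follow from the standard confusion-matrix definitions. A minimal check, using hypothetical counts chosen only to be consistent with the abstract's 127 RT-qPCR-positive samples out of 183 (not the study's raw table):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for a 183-sample validation against RT-qPCR
sens, spec = sensitivity_specificity(tp=116, fn=11, tn=56, fp=0)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")  # sensitivity=91%, specificity=100%
```

Here RT-qPCR serves as the reference standard, so a "true positive" is a sample called positive by both assays.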