Objectives: To evaluate the performance of direct-to-consumer pulse oximeters under clinical conditions, with arterial blood gas measurement (SaO2) as the reference standard.
Design: Cross-sectional validation study.
Setting: Intensive care.
Participants: Adult patients requiring SaO2 monitoring.
Interventions: The studied oximeters are top-selling in Europe/USA (AFAC FS10D, AGPTEK FS10C, ANAPULSE ANP 100, Cocobear, Contec CMS50D1, HYLOGY MD-H37, Mommed YM101, PRCMISEMED F4PRO, PULOX PO-200 and Zacurate Pro Series 500 DL). Directly after collection of an SaO2 blood sample, we obtained pulse oximeter readings (SpO2). SpO2 readings were performed in rotating order, blinded for SaO2, and completed <10 min after blood sample collection.
Outcome measures: Mean bias (SpO2−SaO2), root mean square difference (ARMS), mean absolute error (MAE) and accuracy in identifying hypoxaemia (SaO2 ≤90%). As a clinical index test, we included a hospital-grade SpO2 monitor (Philips).
Results: In 35 consecutive patients, we obtained 2258 SpO2 readings and 234 SaO2 samples. Mean bias ranged from −0.6 to −4.8. None of the pulse oximeters met ARMS ≤3%, the requirement set by International Organisation for Standardisation (ISO) standards and required for Food and Drug Administration (FDA) 510(k) clearance. The MAE ranged from 2.3 to 5.1, and five out of ten pulse oximeters met the requirement of ≤3%. For hypoxaemia, negative predictive values were 98%–99%. Positive predictive values ranged from 11% to 30%. Highest accuracy (95% CI) was found for the Contec CMS50D1, 91% (86–94), and the Zacurate Pro Series 500 DL, 90% (85–94). The hospital-grade SpO2 monitor had an ARMS of 3.0%, an MAE of 1.9 and an accuracy of 95% (91%–97%).
Conclusion: Top-selling, direct-to-consumer pulse oximeters can accurately rule out hypoxaemia, but do not meet the ISO standards required for FDA clearance.
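The outcome measures above (mean bias, ARMS, MAE) are simple functions of the paired SpO2−SaO2 differences. A minimal sketch, using hypothetical readings rather than the study's data:

```python
import math

def oximeter_metrics(spo2, sao2):
    """Compute mean bias, ARMS and MAE for paired SpO2/SaO2 readings (in %)."""
    diffs = [s - a for s, a in zip(spo2, sao2)]               # per-pair bias: SpO2 - SaO2
    mean_bias = sum(diffs) / len(diffs)                       # mean bias
    arms = math.sqrt(sum(d * d for d in diffs) / len(diffs))  # root mean square difference
    mae = sum(abs(d) for d in diffs) / len(diffs)             # mean absolute error
    return mean_bias, arms, mae

# Hypothetical example readings (illustrative only, not from the study)
spo2 = [94, 96, 91, 89, 97]
sao2 = [95, 97, 93, 92, 97]
bias, arms, mae = oximeter_metrics(spo2, sao2)
print(round(bias, 2), round(arms, 2), round(mae, 2))  # → -1.4 1.73 1.4
```

Note that ARMS penalises large deviations more heavily than MAE, which is why an oximeter can pass the MAE ≤3% threshold while failing ARMS ≤3%.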
We present a stochastic, limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm that is suitable for handling very large amounts of data. A direct application of this algorithm is radio interferometric calibration of raw data at fine time and frequency resolution. Almost all existing radio interferometric calibration algorithms assume that the dataset being calibrated fits into memory. Therefore, the raw data is averaged in time and frequency to reduce its size by many orders of magnitude before calibration is performed. However, this averaging is detrimental to the detection of signals of interest with narrow bandwidth and short duration, such as fast radio bursts (FRBs). Using the proposed algorithm, it is possible to calibrate data at such a fine resolution that they cannot be entirely loaded into memory, thus preserving such signals. As an additional demonstration, we use the proposed algorithm for training deep neural networks and compare its performance against the mainstream first-order optimization algorithms used in deep learning.
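The core idea of a stochastic LBFGS can be sketched as follows: build curvature pairs from minibatch gradients (evaluating both gradients of a pair on the same minibatch, so that only one batch need be in memory at a time) and obtain search directions from the standard two-loop recursion. This is an illustrative sketch of that general technique, not the authors' implementation; the function names, fixed step size and memory size are assumptions.

```python
import numpy as np
from collections import deque

def two_loop(grad, mem):
    """L-BFGS two-loop recursion: approximate inverse-Hessian times grad."""
    q = grad.copy()
    alphas = []
    for s, y in reversed(mem):                 # newest pair first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append((a, rho, s, y))
        q -= a * y
    if mem:
        s, y = mem[-1]
        q *= (s @ y) / (y @ y)                 # initial Hessian scaling
    for a, rho, s, y in reversed(alphas):      # oldest pair first
        b = rho * (y @ q)
        q += (a - b) * s
    return q

def stochastic_lbfgs(grad_fn, x, data, batch=32, m=10, lr=0.5, epochs=20, rng=None):
    """Minimise using LBFGS directions built from minibatch gradients.
    Only one minibatch of `data` is touched per step, so the full dataset
    never needs to be held in memory at once."""
    rng = rng or np.random.default_rng(0)
    mem = deque(maxlen=m)                      # limited curvature memory
    for _ in range(epochs):
        idx = rng.permutation(len(data))
        for start in range(0, len(data), batch):
            rows = data[idx[start:start + batch]]
            g = grad_fn(x, rows)
            x_new = x + lr * (-two_loop(g, mem))
            # same minibatch for both gradients keeps the pair consistent
            s, y = x_new - x, grad_fn(x_new, rows) - g
            if s @ y > 1e-10:                  # keep only curvature-positive pairs
                mem.append((s, y))
            x = x_new
    return x
```

In practice a stochastic LBFGS also needs safeguards (line search or trust region, gradient-difference damping) that this sketch omits for brevity.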
Background: It is thought that ChatGPT, an advanced language model developed by OpenAI, may in the future serve as an AI-assisted decision support tool in medicine. Objective: To evaluate the accuracy of ChatGPT's recommendations on medical questions related to common cardiac symptoms or conditions. Methods: We tested ChatGPT's ability to address medical questions in two ways. First, we assessed its accuracy in correctly answering cardiovascular trivia questions (n=50), based on quizzes for medical professionals. Second, we entered 20 clinical case vignettes on the ChatGPT platform and evaluated its accuracy compared to expert opinion and clinical course. Results: We found that ChatGPT correctly answered 74% of the trivia questions, with slight variation in accuracy across the domains of coronary artery disease (80%), pulmonary and venous thrombotic embolism (80%), atrial fibrillation (70%), heart failure (80%) and cardiovascular risk management (60%). In the case vignettes, ChatGPT's response matched the actual advice given in 90% of the cases. In more complex cases, where physicians (general practitioners) asked other physicians (cardiologists) for assistance or decision support, ChatGPT was correct in 50% of cases, and often provided incomplete or inappropriate recommendations when compared with expert consultation. Conclusions: Our study suggests that ChatGPT has potential as an AI-assisted decision support tool in medicine, particularly for straightforward, low-complexity medical questions, but further research is needed to fully evaluate its potential.