Smartphone cameras can measure heart rate (HR) by detecting pulsatile photoplethysmographic (iPPG) signals through post-processing of video of a subject's face. The iPPG signal is often derived from variations in the intensity of the green channel, as shown by Poh et al. and Verkruysse et al. In this pilot study, we introduce a novel iPPG method that measures variations in the color of the reflected light, i.e., Hue, and can therefore extract both HR and respiratory rate (RR) from video of a subject's face. The study was performed on 25 healthy individuals (ages 20–30; 15 males and 10 females; skin color spanning Fitzpatrick scale types 1–6). For each subject we recorded two 20-second videos of the subject's face with minimal movement, one with the flash ON and one with the flash OFF. While recording the videos, we simultaneously measured HR using a Biosync B-50DL finger heart rate monitor and RR by self-reporting. We show that our proposed approach of measuring iPPG using Hue (range 0–0.1) gives more accurate readings than the green channel.
HR/Hue (range 0–0.1) ($r = 0.9201$, $p$-value = 4.1617, RMSE = 0.8887) is more accurate than HR/Green ($r = 0.4916$, $p$-value = 11.60172, RMSE = 0.9068). RR/Hue (range 0–0.1) ($r = 0.6575$, $p$-value = 0.2885, RMSE = 3.8884) is more accurate than RR/Green (…).
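The Hue-based HR estimation described above can be sketched in a few lines: average the Hue channel over a face region in each frame, then take the dominant spectral peak in the physiological pulse band. The function name, band limits, and the synthetic test signal below are illustrative assumptions, not the authors' implementation (which restricts Hue to the 0–0.1 range on real video frames).

```python
import numpy as np

def estimate_hr_from_hue(hue_means, fps, lo_hz=0.75, hi_hz=4.0):
    """Estimate heart rate (BPM) from a per-frame mean-Hue signal.

    hue_means: 1-D array, mean Hue of the face ROI in each frame.
    fps: video frame rate in Hz.
    The dominant spectral peak in the 0.75-4 Hz band (45-240 BPM)
    is taken as the pulse frequency.
    """
    x = np.asarray(hue_means, dtype=float)
    x = x - x.mean()                      # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= lo_hz) & (freqs <= hi_hz)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_hz                 # convert Hz to beats per minute

# Synthetic 20-second clip at 30 fps with a 75 BPM (1.25 Hz) pulse in noise.
np.random.seed(0)
fps, duration = 30, 20
t = np.arange(fps * duration) / fps
hue = 0.05 + 0.002 * np.sin(2 * np.pi * 1.25 * t) \
      + 0.0005 * np.random.randn(t.size)
print(estimate_hr_from_hue(hue, fps))  # → 75.0
```

In a real pipeline, `hue_means` would come from converting each RGB frame of the face ROI to HSV and averaging the Hue plane; the RR peak would be sought in a lower band (roughly 0.1–0.5 Hz) of the same signal.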
In this paper, we propose a lightweight neural network for real-time electrocardiogram (ECG) anomaly detection and system-level power reduction in wearable Internet of Things (IoT) edge sensors. The proposed network utilizes a novel hybrid architecture consisting of Long Short-Term Memory (LSTM) cells and Multi-Layer Perceptrons (MLP). The LSTM block takes a sequence of coefficients representing the morphology of ECG beats, while the MLP input layer is fed features derived from the instantaneous heart rate. Simultaneous training of the two blocks pushes the overall network to learn distinct, complementary features for decision making. The network was evaluated in terms of accuracy, computational complexity, and power consumption using data from the MIT-BIH arrhythmia database. To address the class imbalance in the dataset, we augmented the training data using the SMOTE algorithm. The network achieved an average classification accuracy of 97% across several records in the database. Further, the network was mapped to a fixed-point model, retrained in a bit-accurate fixed-point environment to compensate for the quantization error, and ported to an ARM Cortex-M4 based embedded platform. In laboratory testing, the overall system was successfully demonstrated, and a significant power saving of ≃50% was achieved by gating the wireless transmission with the classifier: transmission was enabled only for the beats deemed anomalous. The proposed technique compares favourably with current methods in terms of computational complexity, and has the advantage of standalone operation in the edge node without the need for always-on wireless connectivity, making it ideal for wearable IoT devices.
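The hybrid architecture described above can be sketched as two parallel branches whose outputs are concatenated before a shared classification head. Layer sizes, sequence length, and feature counts below are assumptions for illustration, not the authors' configuration.

```python
import torch
import torch.nn as nn

class HybridECGClassifier(nn.Module):
    """Illustrative LSTM+MLP hybrid: an LSTM branch consumes a sequence of
    beat-morphology coefficients while an MLP branch consumes instantaneous
    heart-rate features; the branches are fused for the final
    normal-vs-anomalous decision."""

    def __init__(self, n_coeffs=1, n_hr_feats=4, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_coeffs, hidden_size=hidden,
                            batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(n_hr_feats, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, 2)   # normal vs anomalous

    def forward(self, morph_seq, hr_feats):
        _, (h_n, _) = self.lstm(morph_seq)     # final LSTM hidden state
        merged = torch.cat([h_n[-1], self.mlp(hr_feats)], dim=1)
        return self.head(merged)

model = HybridECGClassifier()
# Batch of 8 beats: 32-step morphology sequences plus 4 heart-rate features.
logits = model(torch.randn(8, 32, 1), torch.randn(8, 4))
print(logits.shape)  # → torch.Size([8, 2])
```

Training both branches jointly, as the paper does, lets the optimizer allocate complementary roles to the morphology and heart-rate pathways rather than learning redundant features.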
With advances in circuit design and sensing technology, the simultaneous acquisition of data from a large number of Internet of Things (IoT) sensors to enable more accurate inferences has become mainstream. In this work, we propose a novel convolutional neural network (CNN) model for the fusion of multimodal and multiresolution data obtained from several sensors. The proposed model enables the fusion of multiresolution sensor data without having to resort to padding/resampling to correct for frequency resolution differences, even when carrying out temporal inferences such as high-resolution event detection. The performance of the proposed model is evaluated for sleep apnea event detection by fusing three different sensor signals obtained from the UCD St. Vincent's University Hospital sleep apnea database. The proposed model is generalizable, as demonstrated by incremental performance improvements proportional to the number of sensors used for fusion. A selective dropout technique is used to prevent overfitting of the model to any specific high-resolution input and to increase the robustness of the fusion to signal corruption from any sensor source. A fusion model combining the electrocardiogram (ECG), peripheral oxygen saturation (SpO2), and abdominal movement signals achieved an accuracy of 99.72% and a sensitivity of 98.98%. The energy per classification of the proposed fusion model was estimated to be approximately 5.61 µJ for an on-chip implementation. The feasibility of pruning to reduce the complexity of the fusion models was also studied.
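The fusion-without-resampling idea above can be sketched by giving each modality its own convolutional branch and mapping every branch to a common feature length with adaptive pooling, so inputs sampled at different rates fuse directly. Channel counts, the branch-dropout rate, and the sampling rates in the example are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MultiResFusionNet(nn.Module):
    """Sketch of multiresolution sensor fusion: per-sensor 1-D conv branches
    followed by adaptive pooling to a shared feature length, with a
    'selective dropout' that zeroes a whole branch during training so the
    fusion never over-relies on any single sensor."""

    def __init__(self, n_sensors=3, feat_len=8, p_branch_drop=0.2):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Conv1d(1, 4, kernel_size=5, padding=2),
                          nn.ReLU(),
                          nn.AdaptiveAvgPool1d(feat_len))  # rate-agnostic
            for _ in range(n_sensors))
        self.p = p_branch_drop
        self.head = nn.Linear(n_sensors * 4 * feat_len, 2)

    def forward(self, signals):              # signals: list of (B, 1, L_i)
        feats = []
        for branch, x in zip(self.branches, signals):
            f = branch(x)
            if self.training and torch.rand(1).item() < self.p:
                f = torch.zeros_like(f)      # drop this sensor entirely
            feats.append(f.flatten(1))
        return self.head(torch.cat(feats, dim=1))

net = MultiResFusionNet().eval()
# One 4 s window: ECG at 128 Hz, SpO2 at 8 Hz, abdominal movement at 8 Hz.
out = net([torch.randn(2, 1, 512), torch.randn(2, 1, 32),
           torch.randn(2, 1, 32)])
print(out.shape)  # → torch.Size([2, 2])
```

Because each branch accepts its sensor's native length, adding or removing a modality only changes the width of the concatenated feature vector, which is what makes the incremental, per-sensor evaluation in the abstract straightforward.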