The performance of portable and wearable biosensors is strongly influenced by motion artifacts. In this paper, a novel real-time adaptive algorithm is proposed for accurate, motion-tolerant extraction of heart rate (HR) and pulse oximeter oxygen saturation (SpO2) from wearable photoplethysmographic (PPG) biosensors. The proposed algorithm removes motion artifacts arising from various sources, including tissue effects and venous blood volume changes during body movement, and provides noise-free PPG waveforms for further feature extraction. A two-stage normalized least mean square (NLMS) adaptive noise canceler is designed and validated using a novel synthetic reference signal at each stage. The proposed algorithm is evaluated by Bland-Altman agreement and correlation analyses against reference HR and SpO2 readings from commercial ECG and SpO2 sensors during standing, walking, and running under different conditions in single- and multi-subject scenarios. Experimental results indicate high agreement and high correlation (more than 0.98 for HR and 0.7 for SpO2 extraction) between the measurements of the reference sensors and our algorithm.
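The core of the approach above is an NLMS adaptive noise canceler: a reference signal correlated with the motion artifact drives an adaptive filter whose output is subtracted from the corrupted PPG, and the residual error is the cleaned signal. The following is a minimal single-stage sketch of the standard NLMS update, not the paper's two-stage design; the function name, filter order, and step size are illustrative choices.

```python
import numpy as np

def nlms_cancel(primary, reference, mu=0.5, order=16, eps=1e-6):
    """Single-stage normalized LMS adaptive noise canceler (illustrative).

    primary:   signal corrupted by an artifact correlated with `reference`
               (e.g., a PPG trace during motion), 1-D array.
    reference: artifact reference signal, 1-D array of the same length.
    Returns the error signal, i.e., the artifact-reduced primary signal.
    """
    n = len(primary)
    w = np.zeros(order)                 # adaptive filter taps
    out = np.zeros(n)
    for i in range(order - 1, n):
        x = reference[i - order + 1:i + 1][::-1]   # most-recent-first taps
        y = w @ x                                  # artifact estimate
        e = primary[i] - y                         # error = cleaned sample
        w += (mu / (eps + x @ x)) * e * x          # normalized step update
        out[i] = e
    return out
```

The normalization by the tap-vector energy keeps the update stable for 0 < mu < 2 regardless of the reference signal's power, which is why NLMS is preferred over plain LMS for signals with time-varying amplitude.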
Computer vision (CV) has achieved great success in interpreting semantic meaning from images, yet CV algorithms can be brittle in tasks with adverse vision conditions or limited data/label pairs. One such task is in-bed human pose estimation, which has significant value in many healthcare applications. In-bed pose monitoring in natural settings may involve complete darkness or full occlusion. Furthermore, the lack of publicly available in-bed pose datasets hinders the use of many successful pose estimation algorithms for this task. In this paper, we introduce our Simultaneously-collected multimodal Lying Pose (SLP) dataset, which includes in-bed pose images from 109 participants captured using multiple imaging modalities: RGB, long-wave infrared, depth, and pressure map. We also present a physical hyperparameter tuning strategy for ground-truth pose label generation under extreme conditions such as lights off and full occlusion by a sheet or blanket. The SLP design is compatible with mainstream human pose datasets; therefore, state-of-the-art 2D pose estimation models can be trained effectively on SLP data, with promising performance as high as 95% at PCKh@0.5 on a single modality. Pose estimation performance can be further improved by incorporating additional modalities through multimodal collaboration.
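The PCKh@0.5 figure quoted above is an instance of the Percentage of Correct Keypoints metric: a predicted joint counts as correct if its distance to the ground-truth joint is within a threshold, which for PCKh@0.5 is half the ground-truth head-segment length. A minimal sketch of the metric for a single pose (function name and array shapes are illustrative):

```python
import numpy as np

def pck(pred, gt, threshold):
    """Percentage of Correct Keypoints for one pose (illustrative).

    pred, gt:  (K, 2) arrays of predicted / ground-truth joint coordinates.
    threshold: distance below which a joint counts as correct; for
               PCKh@0.5 this is half the head-segment length, for
               PCK0.2 it is 0.2 times a torso-based reference length.
    Returns the fraction of joints within the threshold.
    """
    dists = np.linalg.norm(pred - gt, axis=1)   # per-joint Euclidean error
    return float(np.mean(dists <= threshold))
```

Because the threshold is scaled by a body-part length rather than fixed in pixels, the metric is comparable across subjects and image resolutions.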
Pressure ulcers are an age-old problem that imposes a huge cost on our healthcare system. Detecting and recording a patient's posture in bed helps caregivers reposition the patient more efficiently and reduces the risk of pressure ulcer development. In this paper, a commercial pressure mapping system is used to create a time-stamped, whole-body pressure map of the patient. An image-based processing algorithm is developed to keep an unobtrusive and informative record of the patient's bed posture over time. Experimental results show that the proposed algorithm can predict the patient's bed posture with up to 97.7% average accuracy. This algorithm could ultimately be combined with current support surface technologies to reduce the risk of ulcer development.
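The abstract does not specify the classification method, but one simple way to treat a pressure map as an image and assign it a posture label is nearest-template matching by normalized correlation. The sketch below is purely hypothetical, assuming a dictionary of per-posture template maps; it is not the paper's algorithm.

```python
import numpy as np

def classify_posture(pressure_map, templates):
    """Nearest-template posture classification (hypothetical sketch).

    pressure_map: 2-D array from a pressure mapping system.
    templates:    dict mapping posture name -> 2-D template array of
                  the same shape.
    Returns the posture whose template has the highest normalized
    correlation with the input map.
    """
    def normalize(m):
        v = m.astype(float).ravel()
        v = v - v.mean()                 # remove mean pressure offset
        n = np.linalg.norm(v)
        return v / n if n > 0 else v

    q = normalize(pressure_map)
    return max(templates, key=lambda name: q @ normalize(templates[name]))
```

Mean removal and unit normalization make the match invariant to the patient's overall weight and the sensor's gain, so only the spatial pattern of pressure decides the label.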
Human in-bed pose estimation has significant practical value in medical and healthcare applications, yet it still relies mainly on expensive pressure mapping (PM) solutions. In this paper, we introduce a novel physics-inspired, vision-based approach that addresses the challenging issues associated with in-bed pose estimation, including monitoring a fully covered person in complete darkness. We reformulate this problem using our proposed Under the Cover Imaging via Thermal Diffusion (UCITD) method, which captures high-resolution pose information of the body, even when it is fully covered, using a long-wavelength IR technique. We propose a physical hyperparameter concept through which we obtain high-quality ground-truth pose labels in different modalities. A fully annotated in-bed pose dataset called Simultaneously-collected multimodal Lying Pose (SLP) is also formed and released; it matches the order of magnitude of most existing large-scale human pose datasets, supporting the training and evaluation of complex models. A network trained from scratch on this dataset and tested in two diverse settings, one in a living room and the other in a hospital room, achieved pose estimation performance of 98.0% and 96.0% under the PCK0.2 standard, respectively. Moreover, in a multi-factor comparison with a state-of-the-art in-bed pose monitoring solution based on PM, our solution showed significant superiority in all practical aspects: it is 60 times cheaper and 300 times smaller, while offering higher pose recognition granularity and accuracy.