Vital-sign estimation using ultra-wideband (UWB) radar is preferable because it is contactless and less privacy-invasive. Recently, many approaches have been proposed for estimating heart rate from UWB radar data. However, their performance is still not reliable enough for practical applications. To improve the accuracy, this study employs convolutional neural networks to learn the characteristic patterns of heartbeats. In the proposed system, skin displacements of the target person are measured using UWB radar, and the radar signal is converted to a two-dimensional matrix, which is used as the input of the designed neural networks. Meanwhile, two triangular waves corresponding to the peaks and valleys in an electrocardiogram are adopted as the output of the networks. The proposed system then identifies each individual and estimates the heart rate automatically based on the trained neural networks. The estimation error of the interbeat interval computed using our approach was reduced to 4.5 ms in the best case and 48.5 ms in the worst case. Experimental results show that the proposed approach significantly outperforms a conventional method. The proposed machine learning approach achieves both personal identification and heart rate estimation simultaneously using UWB radar data for the first time. Moreover, this study found that using the respiration and heartbeat components together may enhance the accuracy of heart rate estimation, which is counterintuitive because respiration is usually believed to interfere with the heartbeat component.
INDEX TERMS Ultra-wideband radar, heart rate, vital signs, convolutional neural networks.
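The abstract describes training targets built as triangular waves centered on reference ECG events. As a minimal sketch of that idea (the function name, half-width value, and sampling rate below are assumptions, not details from the paper), one could construct such a target signal like this:

```python
import numpy as np

def triangular_target(event_times, t, half_width=0.1):
    """Build a triangular-wave training target that rises to 1 at each
    reference event time (e.g., ECG R-peaks or valleys) and is zero
    elsewhere. The 0.1 s half-width is an illustrative assumption.

    event_times : 1-D array of reference event times in seconds
    t           : 1-D array of sample times for the target signal
    """
    target = np.zeros_like(t, dtype=float)
    for e in event_times:
        # Map each sample's distance from the event onto a triangle
        # of height 1, then keep only the non-negative part.
        tri = 1.0 - np.abs(t - e) / half_width
        target = np.maximum(target, np.clip(tri, 0.0, None))
    return target

# Example: events at 1.0 s and 2.0 s, a 3 s window sampled at 100 Hz
t = np.arange(0.0, 3.0, 0.01)
y = triangular_target(np.array([1.0, 2.0]), t)
```

In the paper's setup, one such wave for the ECG peaks and one for the valleys would together form the regression output of the network.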
Computed tomography (CT) and magnetic resonance imaging (MRI) scanners measure three-dimensional (3D) images of patients. However, only local two-dimensional (2D) images may be obtained during surgery or radiotherapy. Although computer vision techniques have shown that 3D shapes can be estimated from multiple 2D images, shape reconstruction from a single 2D image, such as an endoscopic image or an X-ray image, remains a challenge. In this study, we propose X-ray2Shape, a deep learning-based method that reconstructs a 3D organ mesh from a single 2D projection image. The method learns the mesh deformation from a mean template and deep features computed from the individual projection images. Experiments with organ meshes and digitally reconstructed radiograph (DRR) images of abdominal regions were performed to confirm the estimation performance of the proposed method.
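The core idea above, regressing a deformation of a mean template mesh from deep features of a single projection image, can be sketched as follows. This is a hedged illustration only: the mesh size, feature dimension, and the linear decoder standing in for the trained deformation network are all assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a template mesh with V vertices and a
# D-dimensional deep feature vector extracted from the 2D DRR image.
V, D = 642, 128
template = rng.normal(size=(V, 3))   # mean organ shape (V x 3 vertices)
features = rng.normal(size=D)        # deep features of one projection image

# A single linear "decoder" stands in for the learned network:
# it regresses a 3-D displacement for every template vertex.
W = rng.normal(scale=0.01, size=(V * 3, D))
offsets = (W @ features).reshape(V, 3)

# Patient-specific mesh = mean template + predicted deformation.
reconstructed = template + offsets
```

The design choice illustrated here, predicting per-vertex offsets from a shared template rather than free-form vertex positions, keeps the output mesh topology fixed and makes the learning problem a residual regression.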