The early and reliable detection of COVID-19-infected patients is essential to preventing and limiting outbreaks. PCR tests for COVID-19 detection are unavailable in many countries, and there are genuine concerns about their reliability and performance. Motivated by these shortcomings, this article proposes a deep uncertainty-aware transfer learning framework for COVID-19 detection using medical images. Four popular convolutional neural networks (CNNs), including VGG16, ResNet50, DenseNet121, and InceptionResNetV2, are first applied to extract deep features from chest X-ray and computed tomography (CT) images. The extracted features are then processed by different machine learning and statistical modeling techniques to identify COVID-19 cases. We also calculate and report the epistemic uncertainty of the classification results to identify regions where the trained models are not confident about their decisions (the out-of-distribution problem). Comprehensive simulation results for the X-ray and CT image data sets indicate that linear support vector machine and neural network models achieve the best results as measured by accuracy, sensitivity, specificity, and area under the receiver operating characteristic (ROC) curve (AUC). It is also found that predictive uncertainty estimates are much higher for CT images than for X-ray images.
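The epistemic-uncertainty step described above can be illustrated with a minimal sketch. It assumes the common mutual-information decomposition over T stochastic forward passes (e.g. MC dropout); the function name and the toy softmax samples are hypothetical, not taken from the article.

```python
import numpy as np

def epistemic_uncertainty(mc_probs):
    """Mutual-information estimate of epistemic uncertainty.

    mc_probs: array of shape (T, C) -- T stochastic forward passes,
    each row a softmax distribution over C classes.
    """
    eps = 1e-12
    mean_p = mc_probs.mean(axis=0)                   # predictive distribution
    total = -np.sum(mean_p * np.log(mean_p + eps))   # predictive entropy
    # Expected entropy of the individual passes (aleatoric part).
    aleatoric = -np.mean(np.sum(mc_probs * np.log(mc_probs + eps), axis=1))
    return total - aleatoric                         # epistemic part

# All passes agree -> epistemic uncertainty near zero.
agree = np.tile([0.95, 0.05], (10, 1))
# Passes disagree -> higher epistemic uncertainty (out-of-distribution signal).
disagree = np.array([[0.9, 0.1], [0.1, 0.9]] * 5)

print(epistemic_uncertainty(agree) < epistemic_uncertainty(disagree))  # True
```

A high value flags inputs on which the sampled models disagree, which is how such frameworks mark cases the classifier should not be trusted on.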
Motion sickness is a common perturbation experienced by humans in response to motion stimuli. The motion can occur in either real or virtual environments, perceived through the vestibular system and visual illusion. A wide variety of research studies has been conducted to determine and evaluate aspects of motion sickness and its symptoms. To provide insight into the physiological changes associated with motion sickness, researchers have used subjects of different ages and genders, as well as varying electrode positions and environmental conditions. The main purpose of this study is to provide a comprehensive review and comparison of the existing research on the onset and intensification of motion sickness. In this paper, we discuss the appearance of symptoms after motion sickness and summarize the physiological behaviors and emotions across a range of scenarios. In addition, the existing methods for measuring motion sickness levels are compared and discussed in detail. This study considers a number of important factors impacting the results of motion sickness, such as age, gender, health condition, participant state (non/fatigued or non/drowsy), road conditions, and different experimental setups. Finally, this paper presents a range of practical methods to minimize and prevent the unpleasant side effects of motion sickness, including air ventilation, homogenized road/virtual environment features, a comfortable setup, and pre-movement before visual acceleration. A deeper understanding of the changes in physiological signals during vection helps to corroborate traditional subjective reports and also improves our knowledge of the concept of vection.
INDEX TERMS Motion sickness, vestibular and visual conflict, vection, eye movement, postural instability, physiological signals.
Background Optical measurement techniques and recent advances in wearable technology have made heart rate (HR) sensing simpler and more affordable. Objectives The Polar OH1 is an arm-worn optical heart rate monitor. The objectives of this study are twofold: 1) to validate the OH1 optical HR sensor against the gold standard of HR measurement, electrocardiography (ECG), over a range of moderate- to high-intensity physical activities, and 2) to validate wearing the OH1 at the temple as an alternative to its recommended wearing locations around the forearm and upper arm. Methods Twenty-four individuals participated in a physical exercise protocol, walking on a treadmill and riding a stationary spin bike at different speeds while the criterion measure, ECG, and Polar OH1 HR were recorded simultaneously at three body locations: forearm, upper arm, and temple. Time-synchronised HR data points were compared using Bland-Altman analyses and intraclass correlation. Results The intraclass correlation between the ECG and Polar OH1, for the aggregated data, was 0.99, and the estimated mean bias ranged from 0.27 to 0.33 bpm across the sensor locations. The three sensors exhibited 95% limits of agreement (LoA: forearm 5.22, -4.68 bpm; upper arm 5.15, -4.49 bpm; temple 5.22, -4.66 bpm). The mean ECG HR for the aggregated data was 112.15 ± 24.52 bpm. The intraclass correlations of HR values below and above this mean were 0.98 and 0.99, respectively. The reported mean bias ranged from 0.38 to 0.47 bpm (95% LoA: forearm 6.14, -5.38 bpm; upper arm 6.07, -5.13 bpm; temple 6.09, -5.31 bpm) and from 0.15 to 0.16 bpm (95% LoA: forearm 3.99, -3.69 bpm; upper arm 3.90, -3.58 bpm; temple 4.06, -3.76 bpm), respectively. Across the different exercise intensities, the intraclass correlation ranged from 0.95 to 0.99 for the three sensor locations.
During the entire protocol, the estimated mean bias was in the range -0.15 to 0.55 bpm, 0.01 to 0.53 bpm, and -0.37 to 0.48 bpm for the forearm, upper arm, and temple locations, respectively. The corresponding upper limits of the 95% LoA were 3.22 to 7.03 bpm, 3.25 to 6.82 bpm, and 3.18 to 7.04 bpm, while the lower limits were -6.36 to -2.35 bpm, -6.46 to -2.30 bpm, and -7.42 to -2.41 bpm. Conclusion The Polar OH1 demonstrates a high level of agreement with the criterion ECG HR and thus can be used as a valid measure of HR in lab and field settings during moderate- and high-intensity physical activities.
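The Bland-Altman quantities reported above (mean bias and 95% limits of agreement) follow a standard computation, sketched below with NumPy. The function name and the toy HR samples are illustrative, not the study's data.

```python
import numpy as np

def bland_altman(criterion, device, z=1.96):
    """Mean bias and 95% limits of agreement between paired measures.

    criterion, device: paired HR samples in bpm (e.g. ECG vs. optical sensor).
    Returns (bias, lower LoA, upper LoA).
    """
    diff = np.asarray(device, float) - np.asarray(criterion, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)                 # sample SD of the differences
    return bias, bias - z * sd, bias + z * sd

# Hypothetical paired readings in bpm.
ecg = np.array([100.0, 110.0, 120.0, 130.0])
optical = np.array([101.0, 109.0, 121.0, 130.0])
bias, lo, hi = bland_altman(ecg, optical)
print(f"bias={bias:.2f} bpm, LoA=({lo:.2f}, {hi:.2f}) bpm")
```

Narrow limits of agreement around a near-zero bias, as reported for the OH1, indicate that the device tracks the criterion closely.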
Point cloud data from 3D LiDAR sensors are one of the most crucial sensor modalities for versatile safety-critical applications such as self-driving vehicles. Since annotating point cloud data is an expensive and time-consuming process, the use of simulated environments and virtual 3D LiDAR sensors for this task has recently gained popularity. With simulated sensors and environments, obtaining annotated synthetic point cloud data becomes much easier. However, the generated synthetic point cloud data still lack the artefacts that usually exist in point cloud data from real 3D LiDAR sensors. As a result, models trained on these data for perception tasks perform worse when tested on real point cloud data, owing to the domain shift between simulated and real environments. Thus, in this work, we propose a domain adaptation framework for bridging this gap between synthetic and real point cloud data. Our proposed framework is based on the deep cycle-consistent generative adversarial network (CycleGAN) architecture. We have evaluated the performance of our proposed framework on the task of vehicle detection from bird's-eye-view (BEV) point cloud images from real 3D LiDAR sensors. The framework shows competitive results, with an improvement of more than 7% in average precision over other baseline approaches when tested on real BEV point cloud images.
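The cycle-consistency constraint at the heart of the CycleGAN architecture named above can be sketched in a few lines. This is a minimal illustration with NumPy stand-ins for the two generators; the names G, F, the weight lam, and the toy arrays are assumptions, not the authors' implementation.

```python
import numpy as np

def cycle_consistency_loss(G, F, x_real, y_synth, lam=10.0):
    """L1 cycle loss from CycleGAN.

    G maps synthetic BEV images to the real domain; F maps real to synthetic.
    Translating forward and back should recover the original image:
    F(G(y_synth)) ~ y_synth and G(F(x_real)) ~ x_real.
    """
    fwd = np.abs(F(G(y_synth)) - y_synth).mean()   # synthetic -> real -> synthetic
    bwd = np.abs(G(F(x_real)) - x_real).mean()     # real -> synthetic -> real
    return lam * (fwd + bwd)

# Toy generators: G and F are exact inverses, so the cycle loss is zero.
G = lambda t: t + 1.0
F = lambda t: t - 1.0
x = np.zeros((2, 2))   # stand-in for a real BEV image
y = np.ones((2, 2))    # stand-in for a synthetic BEV image
print(cycle_consistency_loss(G, F, x, y))  # 0.0
```

In the full model this term is added to the adversarial losses, which is what lets the generator inject realistic sensor artefacts into synthetic BEV images without destroying their content.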