Diagnosing Parkinson's disease is a complex task that requires the evaluation of several motor and non-motor symptoms. During diagnosis, gait abnormalities are among the important symptoms that physicians should consider. However, gait evaluation is challenging and relies on the expertise and subjectivity of clinicians. In this context, an intelligent gait analysis algorithm may assist physicians and facilitate the diagnostic process. This paper proposes a novel intelligent Parkinson's detection system based on deep learning techniques to analyze gait information. We used a 1D convolutional neural network (1D-Convnet) to build a Deep Neural Network (DNN) classifier. The proposed model processes 18 1D signals coming from foot sensors measuring the vertical ground reaction force (VGRF). The first part of the network consists of 18 parallel 1D-Convnets corresponding to the system inputs. The second part is a fully connected network that connects the concatenated outputs of the 1D-Convnets to obtain a final classification. As a second step, the model predicts the severity of the disease with the Unified Parkinson's Disease Rating Scale (UPDRS). Our experiments demonstrate the high efficiency of the proposed method in the detection of Parkinson's disease based on gait data, achieving an accuracy of 98.7%. To our knowledge, this is the state-of-the-art performance in Parkinson's gait recognition. Furthermore, we achieved an accuracy of 85.3% in Parkinson's severity prediction. To the best of our knowledge, this is the first algorithm to perform a severity prediction based on the UPDRS. These results show that the model is able to learn intrinsic characteristics from gait data and to generalize to unseen subjects, which could be helpful in clinical diagnosis.
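The parallel-branch design described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's actual architecture: the filter size, single filter per branch, window length of 100 samples, and the two-class linear readout are all assumptions chosen only to show how 18 independent 1D convolutions feed one concatenated classifier.

```python
import numpy as np

def conv1d_relu(signal, kernel):
    # One "branch": valid-mode 1D convolution followed by a ReLU.
    out = np.convolve(signal, kernel, mode="valid")
    return np.maximum(out, 0.0)

rng = np.random.default_rng(0)
n_signals, signal_len = 18, 100
signals = rng.normal(size=(n_signals, signal_len))  # one VGRF window per foot sensor
kernels = rng.normal(size=(n_signals, 5))           # one illustrative filter per branch

# Each branch processes its own sensor signal independently...
features = np.concatenate([conv1d_relu(s, k) for s, k in zip(signals, kernels)])

# ...and a dense layer maps the concatenated features to class scores.
W = rng.normal(size=(2, features.size))
logits = W @ features
print(logits.shape)
```

In the real model each branch would stack several convolution and pooling layers with learned weights; the key structural idea is that no parameters are shared across the 18 sensor streams before concatenation.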
Assessment of respiratory activity in the pediatric intensive care unit allows a comprehensive view of the patient's condition. This allows the identification of high-risk cases for prompt and appropriate medical treatment. Numerous research works on respiration monitoring have been conducted in recent years. However, most of them are unsuitable for the clinical environment or require physical contact with the patient, which limits their efficiency. In this paper, we present a novel system for measuring the breathing pattern based on a computer vision method and a contactless design. Our 3D imaging system is specifically designed for the pediatric intensive care environment, which distinguishes it from other imaging methods. Indeed, previous works are mostly limited to the use of conventional video acquisition devices, in addition to not considering the constraints imposed by the intensive care environment. The proposed system uses depth information captured by two RGB-D (Red-Green-Blue-Depth) cameras at different view angles, while respecting the intensive care unit constraints. Depth information is then exploited to reconstruct a 3D surface of the patient's torso with high temporal and spatial resolution and large spatial coverage. Our system captures the motion information for the top of the torso surface as well as for both of its lateral sides. For each reconstruction, the volume is estimated through a recursive subdivision of the 3D space into cubic unit elements. The volume change is then calculated through a subtraction technique between successive reconstructions. We tested our system in the pediatric intensive care unit of the Sainte-Justine University Hospital Center, where it was compared to the gold-standard method currently used in pediatric intensive care units. The performed experiments showed very high accuracy and precision of the proposed imaging system in estimating respiratory rate and tidal volume.
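The volume-by-cubic-elements idea can be illustrated numerically. The abstract describes a recursive subdivision of 3D space; the sketch below replaces that with the simpler fixed-resolution voxel count such a subdivision converges to, and the surface grid, voxel size, and synthetic 2 mm chest rise are all made-up values for illustration only.

```python
import numpy as np

def torso_volume(height_map, voxel=0.001, n_levels=200):
    """Approximate the volume between a reference plane and a reconstructed
    torso surface by counting cubic unit elements (voxels) whose centres
    lie below the surface. Grid cells are assumed voxel-sized in x and y."""
    z = (np.arange(n_levels) + 0.5) * voxel          # voxel centre heights (m)
    occupied = z[None, None, :] < height_map[:, :, None]
    return occupied.sum() * voxel ** 3

# Two successive synthetic reconstructions (surface heights in metres on a
# 1 mm grid): the chest rises 2 mm between expiration and inspiration.
t0 = np.full((50, 80), 0.100)
t1 = np.full((50, 80), 0.102)

# Volume change between successive reconstructions, by subtraction.
tidal_ml = (torso_volume(t1) - torso_volume(t0)) * 1e6
print(tidal_ml)
```

A uniform 2 mm rise over this 50 mm by 80 mm patch yields 8 mL, which shows why a fine voxel size is needed: millimetre-scale surface motion carries the whole tidal-volume signal.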
Background: In intensive care units, experts in mechanical ventilation are not continuously at the patient's bedside to adjust ventilation settings and to analyze the impact of these adjustments on gas exchange. The development of clinical decision support systems analyzing patients' data in real time offers an opportunity to fill this gap. Objective: The objective of this study was to determine whether a machine learning predictive model could be trained on a set of clinical data and used to predict transcutaneous hemoglobin oxygen saturation (SpO2) 5 min after a ventilator setting change. Data sources: Data of mechanically ventilated children admitted between May 2015 and April 2017 were included and extracted from a high-resolution research database. More than 776,727 data rows were obtained from 610 patients and discretized into 3 class labels (< 84%, 85% to 91%, and 92% to 100%). Performance metrics of predictive models: Due to data imbalance, four different data balancing processes were applied. Then, two machine learning models (an artificial neural network and Bootstrap aggregation of complex decision trees) were trained and tested on these four balanced datasets. The best model predicted SpO2 with an area under the curve below 0.75. Conclusion: This single-center pilot study using a machine learning predictive model resulted in an algorithm with poor accuracy. The comparison of machine learning models showed that bagging complex decision trees was a promising approach. However, these models need to be improved before incorporating them into a clinical decision support system. One potential solution for improving the predictive model would be to increase the amount of available data to limit overfitting, which is potentially one of the causes of the poor classification performance for two of the three class labels.
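Two preprocessing steps mentioned above, discretizing SpO2 into three class labels and rebalancing an imbalanced dataset, can be sketched as follows. The boundary handling between 84% and 85% is an assumption, and random oversampling is just one simple balancing scheme (the study applied four different processes that the abstract does not name).

```python
import numpy as np

def spo2_class(spo2):
    """Discretise an SpO2 reading (%) into one of three class labels;
    exact boundary handling is an assumption."""
    if spo2 <= 84:
        return 0          # low saturation
    if spo2 <= 91:
        return 1          # intermediate (85-91%)
    return 2              # normal (92-100%)

def oversample(X, y, rng):
    """Random oversampling: draw rows of each class with replacement
    until every class matches the majority-class count."""
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    idx = np.concatenate(
        [rng.choice(np.flatnonzero(y == c), size=target, replace=True)
         for c in classes]
    )
    return X[idx], y[idx]

rng = np.random.default_rng(0)
spo2 = rng.uniform(80, 100, size=1000)     # synthetic readings
y = np.array([spo2_class(s) for s in spo2])
X = spo2.reshape(-1, 1)                    # stand-in feature matrix

Xb, yb = oversample(X, y, rng)
print(np.bincount(yb))                     # class counts are now equal
```

Oversampling duplicates minority rows rather than discarding majority rows, which preserves all data but can encourage the overfitting the conclusion warns about, one reason the study compared several balancing processes.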
Tracking with a Pan-Tilt-Zoom (PTZ) camera has been a research topic in computer vision for many years. However, it is very difficult to assess the progress that has been made on this topic because there is no standard evaluation methodology. The difficulty in evaluating PTZ tracking algorithms arises from their dynamic nature. In contrast to other forms of tracking, PTZ tracking involves both locating the target in the image and controlling the motors of the camera to aim it so that the target stays in its field of view. This type of tracking can only be performed online. In this paper, we propose a new evaluation framework based on a virtual PTZ camera. With this framework, tracking scenarios do not change for each experiment and we are able to replicate online PTZ camera control and behavior including camera positioning delays, tracker processing delays, and numerical zoom. We tested our evaluation framework with the Camshift tracker to show its viability and to establish baseline results.
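The core difficulty the framework reproduces, camera actuation that lags behind tracker commands, can be shown with a toy simulation. Everything here is illustrative: a single pan axis, a fixed two-frame positioning delay, and a perfect tracker, just to demonstrate how a virtual camera makes online delays repeatable across experiments.

```python
import numpy as np

POSITION_DELAY = 2          # frames before a pan command takes effect (assumed)
target_pan = np.linspace(0.0, 20.0, 30)   # ground-truth target azimuth (degrees)

camera_pan = 0.0
pending = []                # queued (apply_at_frame, commanded_pan) pairs
errors = []
for t, gt in enumerate(target_pan):
    # Apply any commands whose positioning delay has elapsed.
    while pending and pending[0][0] <= t:
        camera_pan = pending.pop(0)[1]
    errors.append(abs(gt - camera_pan))   # tracking error in this frame
    # Even a perfect tracker can only command the *current* target position.
    pending.append((t + POSITION_DELAY, gt))

print(max(errors))   # worst-case error induced purely by the actuation delay
```

Because the scenario and the delay model are fixed in software, rerunning the experiment yields identical errors, which is exactly the reproducibility that a physical PTZ camera cannot provide.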
Phenology has become a field of growing importance due to the increasingly apparent impacts of climate change. However, the time-consuming, subjective and tedious nature of traditional human field observations has hindered the development of large-scale phenology networks. Such networks are rare and rely on time-lapse cameras and simplistic color indexes to monitor phenology. To automate rapid, detailed and repeatable analyses, we propose an Artificial Intelligence (AI) framework based on machine learning and computer vision techniques. Our approach extracts multiple ecologically relevant indicators from time-lapse digital photography datasets. The proposed framework consists of three main components: (i) a random forest model to automatically select relevant images based on color information; (ii) a convolutional neural network (CNN) to identify and localize open tree buds; and (iii) a density-based spatial clustering algorithm to cluster open-bud detections across the time-series. We tested this framework on a dataset including thousands of black spruce and balsam fir tree images captured using our phenological camera network. The performed experiments showed the efficiency of the proposed approach under challenging perturbation factors, such as significant image noise. Our framework is substantially faster and more accurate than human analysts, reducing the time-series processing time from multiple days to under an hour. The proposed methodology is particularly appropriate for large-scale and long-term analyses of ecological imagery datasets. Our work demonstrates that the use of computer vision and machine learning methods represents a promising direction for the implementation of national, continental, or even global plant phenology networks.
Index Terms: Balsam fir, black spruce, computer vision, convolutional neural network, deep learning, forest ecology, object detection, tree budburst.
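Step (iii) of the pipeline, grouping per-frame bud detections into distinct buds, can be illustrated with a compact DBSCAN implementation. The epsilon radius, minimum-points threshold, and the toy detection coordinates below are all assumptions; the paper's actual clustering parameters are not given in the abstract.

```python
import numpy as np

def dbscan(points, eps=5.0, min_pts=3):
    """Minimal DBSCAN sketch (density-based spatial clustering):
    returns a cluster id per point, or -1 for noise."""
    n = len(points)
    labels = np.full(n, -1)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    neighbours = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for i in range(n):
        if visited[i] or len(neighbours[i]) < min_pts:
            continue
        stack = [i]                           # grow a cluster from core point i
        while stack:
            j = stack.pop()
            if visited[j]:
                continue
            visited[j] = True
            labels[j] = cluster
            if len(neighbours[j]) >= min_pts:  # only core points expand further
                stack.extend(neighbours[j])
        cluster += 1
    return labels

# Open-bud detections as (x, y) image coordinates pooled across a
# time-series: two dense groups of detections plus one spurious hit.
pts = np.array([[10, 10], [11, 10], [10, 11], [12, 11], [11, 12],
                [50, 50], [51, 50], [50, 51], [52, 51], [51, 52],
                [100, 0]], dtype=float)
labels = dbscan(pts)
print(labels)   # two clusters plus one noise point (-1)
```

Density-based clustering suits this task because the number of buds is unknown in advance and isolated false detections naturally fall out as noise rather than forming spurious clusters.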