This paper presents the results of a pilot study assessing the feasibility of using accelerometer data to estimate the severity of symptoms and motor complications in patients with Parkinson's disease. A support vector machine (SVM) classifier was implemented to estimate the severity of tremor, bradykinesia, and dyskinesia from features of the accelerometer data. SVM-based estimates were compared with clinical scores derived via visual inspection of video recordings taken while patients performed a series of standardized motor tasks. The video recordings were analyzed by clinicians trained in the use of scales for assessing the severity of Parkinsonian symptoms and motor complications. Results derived from the accelerometer time series were analyzed to assess how the estimation of clinical scores was affected by the duration of the window used to segment the accelerometer data (the segments from which data features were eventually computed), by the choice of SVM kernel and misclassification cost value, and by the use of data features derived from different motor tasks. Results were also analyzed to determine which combinations of data features carried enough information to reliably assess the severity of symptoms and motor complications. Combinations of data features were compared by weighing the computational cost of estimating each data feature on the nodes of a body sensor network against its effect on the reliability of the SVM-based estimates of the severity of Parkinsonian symptoms and motor complications.
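As a rough illustration of the pipeline described above, the sketch below segments a synthetic accelerometer trace into fixed-length windows, computes two simple per-window features, and trains an SVM classifier on them. The window length, feature set (RMS amplitude and dominant frequency), kernel, cost value, and severity labels are all illustrative assumptions, not the study's actual feature set or clinical labels.

```python
import numpy as np
from sklearn.svm import SVC

def window_features(signal, fs=100, win_s=5.0):
    """Split a 1-D accelerometer trace into fixed-length windows and
    compute simple per-window features (RMS amplitude and dominant
    frequency). Window length and feature choices are illustrative."""
    n = int(fs * win_s)
    feats = []
    for start in range(0, len(signal) - n + 1, n):
        seg = signal[start:start + n]
        rms = np.sqrt(np.mean(seg ** 2))
        spec = np.abs(np.fft.rfft(seg - seg.mean()))
        dom_freq = np.fft.rfftfreq(n, d=1.0 / fs)[np.argmax(spec)]
        feats.append([rms, dom_freq])
    return np.array(feats)

# Synthetic example: low- vs high-amplitude oscillation standing in
# for two severity classes (the labels are illustrative, not clinical).
rng = np.random.default_rng(0)
t = np.arange(0, 60, 0.01)                     # 60 s sampled at 100 Hz
mild = 0.2 * np.sin(2 * np.pi * 4 * t) + 0.05 * rng.standard_normal(t.size)
severe = 1.0 * np.sin(2 * np.pi * 5 * t) + 0.05 * rng.standard_normal(t.size)

X = np.vstack([window_features(mild), window_features(severe)])
y = np.array([0] * 12 + [1] * 12)              # 12 five-second windows each

# Kernel choice and misclassification cost C are the SVM
# hyperparameters varied in the study.
clf = SVC(kernel="rbf", C=1.0).fit(X, y)
print(clf.score(X, y))
```

In the study, each node of the body sensor network would compute such features locally, which is why the computational cost of each candidate feature matters alongside its contribution to estimation accuracy.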
For the Imaging and Informatics in Retinopathy of Prematurity (i-ROP) Research Consortium.

IMPORTANCE Retinopathy of prematurity (ROP) is a leading cause of childhood blindness worldwide. The decision to treat is based primarily on the presence of plus disease, defined as dilation and tortuosity of retinal vessels. However, clinical diagnosis of plus disease is highly subjective and variable.

OBJECTIVE To implement and validate an algorithm based on deep learning to automatically diagnose plus disease from retinal photographs.

DESIGN, SETTING, AND PARTICIPANTS A deep convolutional neural network was trained using a data set of 5511 retinal photographs. Each image was previously assigned a reference standard diagnosis (RSD) of normal, pre-plus disease, or plus disease, based on consensus of image grading by 3 experts and clinical diagnosis by 1 expert. The algorithm was evaluated by 5-fold cross-validation and tested on an independent set of 100 images. Images were collected from 8 academic institutions participating in the Imaging and Informatics in ROP (i-ROP) cohort study. The deep learning algorithm was tested against 8 ROP experts, each of whom had more than 10 years of clinical experience and more than 5 peer-reviewed publications about ROP.
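The evaluation protocol described above, stratified 5-fold cross-validation of a three-class classifier, can be sketched as follows. This sketch substitutes a logistic-regression model on synthetic feature vectors for the paper's deep convolutional network on retinal photographs; the features, class separations, and sample sizes are made up for illustration.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression

# Toy stand-in data: 3 diagnostic classes (normal, pre-plus, plus),
# with feature vectors standing in for learned image representations.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 1.0, size=(60, 8)) for c in (0, 2, 4)])
y = np.repeat([0, 1, 2], 60)   # 0 = normal, 1 = pre-plus, 2 = plus

# Stratified folds keep the class proportions equal in every fold,
# which matters when diagnostic classes are imbalanced.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
print(scores.mean())
```

The paper additionally holds out an independent 100-image test set; cross-validation scores alone can be optimistic, so the final comparison against the 8 experts is done on data never used for model selection.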
The classical view of emotion hypothesizes that certain emotion categories have a specific autonomic nervous system (ANS) "fingerprint" that is distinct from other categories. Substantial ANS variation within a category is presumed to be epiphenomenal. The theory of constructed emotion hypothesizes that an emotion category is a population of context-specific, highly variable instances that need not share an ANS fingerprint. Instead, ANS variation within a category is a meaningful part of the nature of emotion. We present a meta-analysis of 202 studies measuring ANS reactivity during lab-based inductions of emotion in nonclinical samples of adults, using a random effects, multilevel meta-analysis and multivariate pattern classification analysis to test our hypotheses. We found increases in mean effect size for 59.4% of ANS variables across emotion categories, but the pattern of effect sizes did not clearly distinguish 1 emotion category from another. We also observed significant variation within emotion categories; heterogeneity accounted for a moderate to substantial percentage (i.e., I2 ≥ 30%) of variability in 54% of these effect sizes. Experimental moderators epiphenomenal to emotion, such as induction type (e.g., films vs. imagery), did not explain a large portion of the variability. Correction for publication bias reduced estimated effect sizes even further, increasing heterogeneity of effect sizes for certain emotion categories. These findings, when considered in the broader empirical literature, are more consistent with population thinking and other principles from evolutionary biology found within the theory of constructed emotion, and offer insights for developing new hypotheses to understand the nature of emotion.
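The heterogeneity statistic I² referenced above can be computed from study-level effect sizes and their sampling variances via Cochran's Q. A minimal sketch follows; the effect sizes and variances below are made up for illustration, not values from the meta-analysis.

```python
import numpy as np

def i_squared(effects, variances):
    """Cochran's Q and the I^2 heterogeneity statistic for a set of
    study-level effect sizes with known sampling variances."""
    effects = np.asarray(effects, float)
    w = 1.0 / np.asarray(variances, float)          # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)        # weighted pooled effect
    q = np.sum(w * (effects - pooled) ** 2)         # Cochran's Q
    df = len(effects) - 1
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, i2

# Hypothetical effects from 5 studies with equal sampling variance.
q, i2 = i_squared([0.1, 0.5, 0.9, 0.2, 0.8], [0.01] * 5)
print(round(q, 2), round(i2, 1))   # prints: 50.0 92.0
```

An I² of 30% or more, the threshold used in the abstract, indicates that at least a moderate share of the observed variability reflects true between-study heterogeneity rather than sampling error.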
In this paper, we identify two issues involved in developing an automated feature subset selection algorithm for unlabeled data: the need to find the number of clusters in conjunction with feature selection, and the need to normalize the bias of feature selection criteria with respect to dimension. We explore the feature selection problem and these issues through FSSEM (Feature Subset Selection using Expectation-Maximization (EM) clustering) and through two different performance criteria for evaluating candidate feature subsets: scatter separability and maximum likelihood. We present proofs of the dimensionality biases of these feature selection criteria, and present a cross-projection normalization scheme that can be applied to any criterion to ameliorate these biases. Our experiments show the need for feature selection, the need for addressing these two issues, and the effectiveness of our proposed solutions.
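A minimal sketch of wrapper-style forward feature selection around EM (Gaussian mixture) clustering, in the spirit of FSSEM, is shown below. Note the substitutions: BIC is used as a simple, dimension-penalized scoring criterion in place of the paper's scatter-separability and maximum-likelihood criteria with cross-projection normalization, and the number of clusters is fixed rather than searched over; the data are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def forward_select(X, n_clusters, n_feats):
    """Greedy forward selection: grow a feature subset by repeatedly
    adding the feature whose inclusion gives the best EM clustering fit.
    BIC stands in for the paper's normalized criteria."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_feats:
        best_f, best_bic = None, np.inf
        for f in remaining:
            cols = selected + [f]
            gm = GaussianMixture(n_clusters, random_state=0).fit(X[:, cols])
            bic = gm.bic(X[:, cols])
            if bic < best_bic:
                best_f, best_bic = f, bic
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

# Synthetic data: features 0-1 carry the cluster structure (two tight
# clusters), features 2-3 are pure noise.
rng = np.random.default_rng(2)
informative = np.vstack([rng.normal(m, 0.3, size=(50, 2)) for m in (0, 3)])
noise = rng.normal(0, 1, size=(100, 2))
X = np.hstack([informative, noise])

selected = forward_select(X, n_clusters=2, n_feats=2)
print(selected)   # expected to pick the two informative features
```

BIC's complexity penalty grows with the number of parameters, which partially addresses the dimensionality bias the paper identifies; the paper's cross-projection scheme handles the same problem for criteria, such as raw likelihood and scatter separability, that otherwise favor a particular subset size.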