Non-small-cell lung cancer (NSCLC) represents approximately 80–85% of lung cancer diagnoses and is the leading cause of cancer-related death worldwide. Recent studies indicate that image-based radiomics features from positron emission tomography/computed tomography (PET/CT) images have predictive power for NSCLC outcomes. Easily calculated functional features, such as the maximum and mean standardized uptake value (SUV) and total lesion glycolysis (TLG), are most commonly used for NSCLC prognostication, but their prognostic value remains controversial. Meanwhile, convolutional neural networks (CNNs) are rapidly emerging as a new method for cancer image analysis, with significantly enhanced predictive power compared to hand-crafted radiomics features. Here we show that CNNs trained to perform tumor segmentation, given no information other than physician contours, identify a rich set of survival-related image features with remarkable prognostic value. In a retrospective study of pre-treatment PET/CT images of 96 NSCLC patients treated with stereotactic body radiotherapy (SBRT), we found that a CNN segmentation algorithm (U-Net) trained for tumor segmentation in PET and CT images contained features strongly correlated with 2- and 5-year overall and disease-specific survival. The U-Net was given no clinical information (e.g., survival, age, or smoking history) other than the images and the corresponding tumor contours provided by physicians. In addition, we observed the same trend when validating the U-Net features on an external dataset provided by the Stanford Cancer Institute. Furthermore, through visualization of the U-Net, we found convincing evidence that regions of metastasis and recurrence match the regions where the U-Net features identified patterns predicting a higher likelihood of death.
We anticipate our findings will be a starting point for more sophisticated, non-invasive, patient-specific cancer prognostication. For example, the deep-learned PET/CT features can not only predict survival but also visualize high-risk regions within or adjacent to the primary tumor, and hence could potentially improve therapeutic outcomes by guiding the selection of a therapeutic strategy or the adjustment of first-line therapy.
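The core idea above, that intermediate activations of a segmentation network can double as prognostic features, can be sketched in miniature. This is an illustrative NumPy-only mock-up, not the authors' pipeline: the convolution kernels, the pooled "bottleneck" features, the image sizes, and the survival labels are all synthetic placeholders standing in for trained U-Net filters and real clinical data.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """Naive valid-mode 2D convolution (stand-in for a learned U-Net layer)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def bottleneck_features(img, kernels):
    """Global-average-pool each filter response into one scalar feature."""
    return np.array([conv2d_valid(img, k).mean() for k in kernels])

# Synthetic cohort: 96 "PET slices" and binary 2-year survival labels.
n_patients, n_filters = 96, 8
kernels = [rng.standard_normal((3, 3)) for _ in range(n_filters)]
images = rng.random((n_patients, 16, 16))
survival = rng.integers(0, 2, size=n_patients)

features = np.stack([bottleneck_features(im, kernels) for im in images])

# Point-biserial correlation of each pooled feature with survival:
# features with large |corr| would be the "survival-related" candidates.
corr = np.array([np.corrcoef(features[:, j], survival)[0, 1]
                 for j in range(n_filters)])
print(features.shape, corr.shape)  # → (96, 8) (8,)
```

In the actual study the features would come from a trained network and the correlations would be tested against held-out survival data; here the point is only the shape of the pipeline: segment, pool internal activations, correlate with outcome.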
Falls among the elderly population cause detrimental physical, mental, and financial problems and, in the worst case, death. The increasing number of people entering the higher-risk age range has heightened clinicians' attention to intervention. Clinical tools, e.g., the Timed Up and Go (TUG) test, have been created to aid clinicians in fall-risk assessment. Though often simple to administer, these assessments are subject to a clinician's judgment. Wearable sensor data combined with machine learning algorithms have been introduced as an alternative to precisely quantify ambulatory kinematics and predict prospective falls. However, such approaches require long-term evaluation of large samples of subjects' locomotion and complex feature engineering of sensor kinematics. Therefore, it is critical to build an objective fall-risk detection model that can efficiently measure biometric risk factors at minimal cost. We built and studied a sensor-data-driven convolutional neural network model to predict older adults' fall-risk status with relatively high sensitivity relative to a geriatrician's expert assessment. The sample in this study is representative of older patients with multiple comorbidities seen in daily medical practice. Three non-intrusive wearable sensors were used to measure participants' gait kinematics during the TUG test. This data collection ensured convenient capture of various aspects of gait impairment at different body locations.
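A convolutional model over wearable-sensor kinematics of the kind described can be sketched as a 1D convolution over a windowed accelerometer signal followed by a pooled read-out. The layer sizes, sampling rate, and weights below are hypothetical placeholders, not the study's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1d_relu(x, kernels, stride=1):
    """Valid-mode 1D convolution over a (channels, time) signal, with ReLU."""
    n_out, n_in, k = kernels.shape
    t_out = (x.shape[1] - k) // stride + 1
    out = np.zeros((n_out, t_out))
    for f in range(n_out):
        for t in range(t_out):
            seg = x[:, t * stride:t * stride + k]
            out[f, t] = np.sum(seg * kernels[f])
    return np.maximum(out, 0.0)

# One 5-second window of tri-axial accelerometer data at 100 Hz (synthetic).
window = rng.standard_normal((3, 500))
layer1 = rng.standard_normal((16, 3, 9)) * 0.1   # 16 filters of width 9
h = conv1d_relu(window, layer1, stride=2)

# Global average pooling + a sigmoid read-out gives a fall-risk score in (0, 1).
weights, bias = rng.standard_normal(16) * 0.1, 0.0
score = 1.0 / (1.0 + np.exp(-(h.mean(axis=1) @ weights + bias)))
print(h.shape, float(score))
```

A trained model would learn the filters and read-out weights from labeled TUG recordings; the sketch only shows how raw kinematics flow to a single risk score without hand-crafted features.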
Machine learning was applied to classify tension-strain curves harvested from inflation tests on ascending thoracic aneurysm samples. The curves were classified into rupture and nonrupture groups using prerupture response features. Two groups of features were used as the basis for classification: the first comprised the constitutive parameters fitted to the tension-strain data, and the second comprised geometric parameters extracted from the tension-strain curve. Based on the importance scores provided by the machine learning model, the implications of some features were interrogated. It was found that (1) the value of one constitutive parameter is nearly the same for all members of the rupture group, and (2) the strength correlates strongly with the tension in the early phase of the response as well as with the end stiffness. The study suggests that the strength, which cannot be measured without rupturing the tissue, may be indirectly inferred from prerupture response features.
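The two curve-derived quantities highlighted above, early-phase tension and end stiffness, can be extracted from a tension-strain curve as follows. This is a generic sketch on a synthetic exponential-style tissue response; the early-strain threshold and the tail fraction used for the stiffness fit are illustrative choices, not values from the study.

```python
import numpy as np

def curve_features(strain, tension, early_strain=0.05):
    """Extract two prerupture features from a tension-strain curve:
    the tension at a small 'early phase' strain, and the end stiffness
    (least-squares slope over the final 20% of the recorded response)."""
    early_tension = np.interp(early_strain, strain, tension)
    n_tail = max(2, len(strain) // 5)
    end_stiffness = np.polyfit(strain[-n_tail:], tension[-n_tail:], 1)[0]
    return early_tension, end_stiffness

# Synthetic stiffening response: T(e) = a * (exp(b * e) - 1).
strain = np.linspace(0.0, 0.3, 100)
a, b = 2.0, 10.0
tension = a * (np.exp(b * strain) - 1.0)

early_t, k_end = curve_features(strain, tension)
print(early_t, k_end)
```

In the study's setting, features like these would be computed per sample and fed, alongside fitted constitutive parameters, to a classifier whose importance scores rank their association with rupture.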
Measurement of neuronal size is challenging due to neurons' complex histology. Current practice involves manual or pseudo-manual measurement of somatic areas, which is labor-intensive and prone to human bias and intra-/inter-observer variance. We developed a novel high-throughput neuronal morphology analysis framework (ANMAF), using convolutional neural networks (CNNs) to automatically contour the somatic area of fluorescent neurons in acute brain slices. Our results demonstrate considerable agreement between human annotators and ANMAF on the detection, segmentation, and area of somatic regions in neurons expressing a genetically encoded fluorophore. However, in contrast to humans, who exhibited significant variability in repeated measurements, ANMAF produced consistent neuronal contours. ANMAF was generalizable across different imaging protocols and trainable even with a small number of human-labeled neurons. Our framework can facilitate more rigorous and quantitative studies of neuronal morphology by enabling the segmentation of many fluorescent neurons in thick brain slices in a standardized manner.
A robust and informative local shape descriptor plays an important role in mesh registration. In this regard, spectral descriptors based on the spectrum of the Laplace-Beltrami operator have gained attention among researchers over the last decade due to their desirable properties, such as isometry invariance. However, spectral descriptors often fail to give a correct similarity measure in non-isometric cases where the metric distortion between the models is large. Hence, they are in general not suitable for registration problems, except in the special case when the models are nearly isometric. In this paper, we investigate a way to develop shape descriptors for non-isometric registration tasks by embedding the spectral shape descriptors into a different metric space in which the Euclidean distance between elements directly indicates geometric dissimilarity. We design and train a Siamese deep neural network to find such an embedding, in which the embedded descriptors are encouraged to rearrange according to geometric similarity. We found that our approach can significantly enhance the performance of conventional spectral descriptors on non-isometric registration tasks, and that it outperforms recent state-of-the-art methods reported in the literature.
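The Siamese training objective described above is conventionally a contrastive loss: similar descriptor pairs are pulled together in the embedding space, dissimilar pairs pushed at least a margin apart. The sketch below uses a plain linear map as a stand-in for one branch of the network; the descriptor dimensions, margin, and weights are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(2)

def embed(descriptor, W):
    """Map a spectral descriptor into the learned metric space
    (a linear map stands in for one Siamese branch)."""
    return W @ descriptor

def contrastive_loss(z1, z2, similar, margin=1.0):
    """Pull similar pairs together; push dissimilar pairs
    at least `margin` apart in embedding space."""
    d = np.linalg.norm(z1 - z2)
    if similar:
        return d ** 2
    return max(0.0, margin - d) ** 2

dim_in, dim_out = 32, 8                 # e.g. 32 spectral values -> 8-D embedding
W = rng.standard_normal((dim_out, dim_in)) * 0.1

x = rng.standard_normal(dim_in)
x_sim = x + 0.01 * rng.standard_normal(dim_in)   # near-identical descriptor
x_dis = rng.standard_normal(dim_in)              # unrelated descriptor

loss_pos = contrastive_loss(embed(x, W), embed(x_sim, W), similar=True)
loss_neg = contrastive_loss(embed(x, W), embed(x_dis, W), similar=False)
print(loss_pos, loss_neg)
```

Training both branches with shared weights on many such pairs drives the embedded Euclidean distance to track geometric dissimilarity, which is the property the registration step then exploits.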