Dynamic contrast-enhanced MR imaging plays a crucial role in evaluating the effectiveness of neoadjuvant chemotherapy (NAC), even at an early stage, through prediction of the final pathological complete response (pCR). In this study, we proposed a transfer learning approach to predict whether a patient achieved pCR or did not (non-pCR) by exploiting, separately or in combination, pre-treatment and early-treatment exams from the public I-SPY1 TRIAL database. First, low-level features, i.e., features related to the local structure of the image, were automatically extracted by a pre-trained convolutional neural network (CNN), thus avoiding manual feature extraction. Next, an optimal set of the most stable features was detected and then used to train an SVM classifier. A first subset of patients, called the fine-tuning dataset (30 pCR; 78 non-pCR), was used to select the optimal features. A second subset not involved in the feature selection process was employed as an independent test set (7 pCR; 19 non-pCR) to validate the model. By combining the optimal features extracted from both pre-treatment and early-treatment exams with some clinical features, i.e., ER, PgR, HER2 and molecular subtype, accuracies of 91.4% and 92.3% and AUC values of 0.93 and 0.90 were obtained on the fine-tuning dataset and the independent test set, respectively. Overall, the low-level CNN features play an important role in the early evaluation of NAC efficacy by predicting pCR. The proposed model represents a first effort towards the development of a clinical support tool for an early prediction of pCR to NAC.
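The pipeline described above can be sketched as follows. This is a minimal illustration only: synthetic arrays stand in for the CNN activations and clinical variables (the feature dimensions, the univariate selector, and the linear kernel are assumptions, not the paper's exact choices).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-ins for low-level CNN features (108 patients x 512 features)
X_cnn = rng.normal(size=(108, 512))
# Illustrative binary clinical variables (ER, PgR, HER2, molecular subtype)
X_clin = rng.integers(0, 2, size=(108, 4)).astype(float)
X = np.hstack([X_cnn, X_clin])
y = rng.integers(0, 2, size=108)  # 1 = pCR, 0 = non-pCR

# Feature selection followed by an SVM, evaluated by cross-validation
model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=20),  # keep the most discriminative features
    SVC(kernel="linear"),
)
scores = cross_val_score(model, X, y, cv=5)
print(scores.shape)  # (5,)
```

On real data, the selection step would be run only on the fine-tuning dataset, and the held-out test set would be scored once with the frozen pipeline.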
Contrast-enhanced spectral mammography (CESM) is an advanced instrument for breast care that is still operator dependent. This paper proposes an automated system able to discriminate benign and malignant breast lesions based on radiomic analysis. We selected a set of 58 regions of interest (ROIs) extracted from 53 patients referred to the Istituto Tumori “Giovanni Paolo II” of Bari (Italy) for the breast cancer screening phase between March 2017 and June 2018. We extracted 464 features of different kinds, such as interest points and corners, as well as textural and statistical features, from both the original ROIs and the ones obtained by a Haar decomposition and a gradient image implementation. The feature set was high-dimensional, which can affect the process and accuracy of cancer classification. Therefore, dimension reduction was needed before classification. Specifically, a principal component analysis (PCA) dimension reduction technique that includes the calculation of the variance proportion for eigenvector selection was used. For the classification step, we trained three different classifiers, namely a random forest, a naïve Bayes classifier and a logistic regression, on each subset of principal components (PCs) selected by a sequential forward algorithm. Moreover, we focused on the starting features that contributed most to the calculation of the PCs that returned the best classification models. The method based on the random forest classifier yielded the best prediction of benign/malignant ROIs, with median values for sensitivity and specificity of 88.37% and 100%, respectively, using only three PCs. The features contributing most to these PCs were almost all extracted from the low-energy (LE) images. Our system could represent a valid support tool for radiologists for interpreting CESM images.
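The PCA-plus-sequential-forward-selection scheme can be sketched as below. Random arrays stand in for the 464 radiomic features; the number of retained components, the fold count, and the forest size are illustrative assumptions, not the study's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(58, 464))   # 58 ROIs x 464 radiomic features (synthetic)
y = rng.integers(0, 2, size=58)  # 1 = malignant, 0 = benign

# Project onto the first principal components
Z = PCA(n_components=10).fit_transform(X)

# Greedy sequential forward selection over principal components:
# at each round, add the PC that most improves cross-validated accuracy
selected = []
for _ in range(3):
    cand_scores = {}
    for j in range(Z.shape[1]):
        if j in selected:
            continue
        cols = selected + [j]
        clf = RandomForestClassifier(n_estimators=50, random_state=0)
        cand_scores[j] = cross_val_score(clf, Z[:, cols], y, cv=3).mean()
    selected.append(max(cand_scores, key=cand_scores.get))
print(selected)
```

Tracing each selected PC back to its largest PCA loadings would then identify which original features (e.g., those from the LE images) drive the model.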
Cell-cell interactions are an observable manifestation of underlying complex biological processes occurring in response to diversified biochemical stimuli. Recent experiments with microfluidic devices and live cell imaging show that it is possible to characterize cell kinematics via computerized algorithms and unravel the effects of targeted therapies. We study the influence of the spatial and temporal resolutions of time-lapse videos on motility and interaction descriptors with computational models that mimic the interaction dynamics among cells. We show that the experimental set-up of time-lapse microscopy has a direct impact on the cell tracking algorithm and on the derived numerical descriptors. We also show that, when comparing kinematic descriptors in two diverse experimental conditions, resolutions that are too low may alter the descriptors’ discriminative power, and thus the statistical significance of the difference between the two compared distributions. The conclusions derived from the computational models were experimentally confirmed by a series of video-microscopy acquisitions of co-cultures of unlabelled human cancer and immune cells embedded in 3D collagen gels within microfluidic devices. We argue that the experimental protocol of acquisition should be adapted to the specific kind of analysis involved and to the chosen descriptors in order to derive reliable conclusions and avoid biasing the interpretation of results.
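The effect of temporal resolution on a kinematic descriptor can be illustrated with a toy computation (not the paper's model): for a simulated random-walk trajectory, subsampling the frames lowers the apparent mean speed, because intermediate turns are missed. All parameter values here are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
# Simulated 2D random-walk cell trajectory: one position per frame
n_frames, dt = 200, 1.0  # dt = minutes between consecutive frames
steps = rng.normal(scale=1.5, size=(n_frames, 2))
traj = np.cumsum(steps, axis=0)

def mean_speed(track, frame_interval):
    """Mean speed when only every `frame_interval`-th frame is kept,
    mimicking a lower temporal acquisition resolution."""
    sub = track[::frame_interval]
    disp = np.linalg.norm(np.diff(sub, axis=0), axis=1)
    return disp.mean() / (frame_interval * dt)

# Same trajectory, three simulated acquisition rates
speeds = {k: mean_speed(traj, k) for k in (1, 5, 10)}
print(speeds)
```

For a random walk the net displacement over k frames grows only as sqrt(k), so the estimated speed shrinks as the sampling interval grows, which is one way a descriptor's discriminative power can be eroded at low resolution.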
We describe a novel method to achieve a universal, massive, and fully automated analysis of cell motility behaviours, starting from time-lapse microscopy images. The approach was inspired by the recent successes in the application of machine learning to style recognition in paintings and artistic style transfer. The originality of the method relies (i) on the generation of an atlas from the collection of single-cell trajectories in order to visually encode the multiple descriptors of cell motility, and (ii) on the application of a pre-trained deep learning convolutional neural network architecture in order to extract from this visual atlas relevant features to be used for classification tasks. Validation tests were conducted on two different cell motility scenarios: 1) 3D biomimetic gels of immune cells, co-cultured with breast cancer cells in organ-on-chip devices, upon treatment with an immunotherapy drug; 2) Petri dishes of clustered prostate cancer cells, upon treatment with a chemotherapy drug. For each scenario, single-cell trajectories are very accurately classified according to the presence or absence of the drugs. This original approach demonstrates the existence of universal features in cell motility (a so-called "motility style"), which are identified by the DL approach with the rationale of discovering the unknown message in cell trajectories. Cell motility is fundamental for life, along the entire evolutionary tree, being involved in bacterial collective motion [1], in the morphogenesis of pluricellular organisms [2], in adult physiological processes (such as tissue repair and immune cell trafficking) [3] and in some pathologies (such as cancer metastasis) [4-7]. Nature evolved a variety of cell motility modes: single-cell or collective, mesenchymal or amoeboid, random or directed, etc.
Yet, since the driving force of cell motility is always the active reorganization of the cellular cytoskeleton, it is reasonable to assume that some universal principles of cell motility behaviours have been conserved. We applied a machine learning approach to explore this hypothesis, exploiting a Deep Learning (DL) architecture in a novel tool called Deep Tracking. DL is a recent machine learning framework [8] developed on the basis of the functioning of the human brain. A DL technique learns how to extract the "style" of an atlas of digital images (like the style from an atlas of an artist's paintings [9,10]) in order to represent a given set of pictures in terms of the most relevant quantitative descriptors (i.e., features) [8]. We addressed the question of whether DL could be proficient in extracting motility styles, i.e. the paintings drawn by cells while moving. Typically, cell motility experiments use time-lapse microscopy imaging (Fig. 1). Starting from the image stacks (Fig. 1A), video processing methods are used to track cell trajectories (see the description of the Cell Hunter tool [11,12] in Steps 2 and 3, Methods section) (Fig. 1B). The first step of our Deep Tracking method relies on the assembly of the individual cell tracks collected for e...
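The atlas-generation step, i.e. turning trajectories into the "paintings" that a pre-trained CNN would later featurize, can be sketched as below. The rasterization scheme, image size, and tile layout are illustrative assumptions, and random walks stand in for measured cell tracks.

```python
import numpy as np

rng = np.random.default_rng(3)

def render_track(track, size=64):
    """Rasterize one single-cell trajectory into a small grayscale image:
    the visual 'painting' of its motility from which CNN features are drawn."""
    img = np.zeros((size, size))
    t = track - track.min(axis=0)   # shift into the positive quadrant
    span = t.max()
    if span > 0:
        t = t / span * (size - 1)   # fit the track into the image
    for x, y in t.astype(int):
        img[y, x] = 1.0
    return img

# Build a 4x4 atlas: one tile per trajectory (here, simulated random walks)
tracks = [np.cumsum(rng.normal(size=(100, 2)), axis=0) for _ in range(16)]
tiles = [render_track(t) for t in tracks]
atlas = np.block([[tiles[4 * r + c] for c in range(4)] for r in range(4)])
print(atlas.shape)  # (256, 256)
```

In the full method, each tile (or the atlas) would then be fed to a pre-trained convolutional network, and the resulting feature vectors used for drug/no-drug classification.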
In breast cancer patients, an accurate detection of the axillary lymph node metastasis status is essential for reducing the probability of distant metastasis occurrence. For patients who test negative at both clinical and instrumental examination, the nodal status is commonly evaluated by performing a sentinel lymph-node (SLN) biopsy, a time-consuming and expensive intraoperative procedure for SLN status assessment. The aim of this study was to predict the nodal status of 142 clinically negative breast cancer patients by means of both clinical and radiomic features extracted from primary breast tumor ultrasound images acquired at diagnosis. First, different regions of interest (ROIs) were segmented and a radiomic analysis was performed on each ROI. Then, clinical and radiomic features were evaluated separately by developing two different machine learning models based on an SVM classifier. Finally, their joint predictive power was estimated by implementing a soft voting technique. The experimental results showed that the model obtained by combining clinical and radiomic features provided the best performance, achieving an AUC value of 88.6%, an accuracy of 82.1%, a sensitivity of 100% and a specificity of 78.2%. The proposed model represents a promising non-invasive procedure for SLN status prediction in clinically negative patients.
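The soft-voting combination of the two SVMs can be sketched as follows, with synthetic arrays standing in for the clinical and radiomic feature matrices (feature counts, kernels, and equal vote weights are assumptions of this sketch).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n = 142
X_clin = rng.normal(size=(n, 5))   # stand-in clinical features
X_rad = rng.normal(size=(n, 30))   # stand-in radiomic features
y = rng.integers(0, 2, size=n)     # 1 = positive SLN, 0 = negative

# One SVM per feature family, each producing class probabilities
clin_model = make_pipeline(StandardScaler(), SVC(probability=True)).fit(X_clin, y)
rad_model = make_pipeline(StandardScaler(), SVC(probability=True)).fit(X_rad, y)

# Soft voting: average the two models' probability estimates
p = (clin_model.predict_proba(X_clin) + rad_model.predict_proba(X_rad)) / 2
pred = p.argmax(axis=1)
print(pred.shape)  # (142,)
```

In practice the probabilities would be produced on held-out folds rather than on the training data, and the AUC/sensitivity/specificity computed from the averaged scores.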
Cancer treatment planning benefits from an accurate early prediction of the treatment efficacy. The goal of this study is to give an early prediction of three-year Breast Cancer Recurrence (BCR) for patients who underwent neoadjuvant chemotherapy. We addressed the task from a new perspective based on transfer learning applied to pre-treatment and early-treatment DCE-MRI scans. Firstly, low-level features were automatically extracted from MR images using a pre-trained Convolutional Neural Network (CNN) architecture without human intervention. Subsequently, the prediction model was built with an optimal subset of CNN features and evaluated on two sets of patients from the I-SPY1 TRIAL and BREAST-MRI-NACT-Pilot public databases: a fine-tuning dataset (70 non-recurrent and 26 recurrent cases), which was primarily used to find the optimal subset of CNN features, and an independent test set (45 non-recurrent and 17 recurrent cases), whose patients had not been involved in the feature selection process. The best results were achieved when the optimal CNN features were augmented by four clinical variables (age, ER, PgR, HER2+), reaching an accuracy of 91.7% and 85.2%, a sensitivity of 80.8% and 84.6%, a specificity of 95.7% and 85.4%, and an AUC value of 0.93 and 0.83 on the fine-tuning dataset and the independent test set, respectively. Finally, the CNN features extracted from pre-treatment and early-treatment exams proved to be strong predictors of BCR.
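Augmenting the CNN features with clinical variables and scoring by AUC can be sketched as below; the arrays, split ratio, and kernel are illustrative assumptions only (random data, so the resulting AUC carries no meaning).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 96
X_cnn = rng.normal(size=(n, 50))                        # stand-in optimal CNN features
X_clin = rng.integers(0, 2, size=(n, 4)).astype(float)  # age/ER/PgR/HER2+ stand-ins
X = np.hstack([X_cnn, X_clin])                          # augmented feature vector
y = rng.integers(0, 2, size=n)                          # 1 = recurrence within 3 years

# Hold out part of the data, train an SVM, and score the held-out AUC
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)
model = make_pipeline(StandardScaler(), SVC(probability=True)).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(round(auc, 3))
```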