Aims: Resource-strained healthcare ecosystems often struggle to adopt the World Health Organization (WHO) recommendations for the classification of central nervous system (CNS) tumors. Robust clinical diagnostic aids and simple tools to inform investment strategies in surgical neuropathology would improve patient care in these settings. Methods: We used simple information theory calculations on a brain cancer simulation model and real-world data sets to compare the contributions of clinical, histologic, immunohistochemical, and molecular information. We generated an image noise assay to compare the efficiencies of different image segmentation methods on H&E- and Olig2-stained images obtained from digital slides. We built an auto-adjustable image analysis workflow and compared it with neuropathologists for p53 positivity quantification. Finally, the density of extracted nuclear features, p53 positivity quantification, and a combined ATRX/age feature were used to generate a predictive model for 1p/19q codeletion in IDH-mutant tumors. Results: Information theory calculations can be performed on open-access platforms and provide significant insight into linear and nonlinear associations between diagnostic biomarkers. Age, p53, and ATRX status carry significant information for the diagnosis of IDH-mutant tumors. The predictive models may reduce false-positive 1p/19q codeletion calls from fluorescence in situ hybridization (FISH) testing. Conclusions: We posit that this approach improves on the cIMPACT-NOW workflow recommendations for IDH-mutant tumors and provides a framework for future resource and testing allocation.
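The kind of information-content comparison described above can be sketched in a few lines. The example below is entirely hypothetical: it simulates a cohort in which one binary marker (standing in for ATRX status) is strongly associated with 1p/19q codeletion while another is uninformative, and compares their mutual information with the diagnosis using scikit-learn. It illustrates the technique only and is not the authors' actual model or data.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(42)
n = 2000

# Hypothetical cohort: 1p/19q codeletion status (1 = codeleted)
codel = rng.integers(0, 2, n)

# Hypothetical informative marker: agrees with the inverse of codeletion
# 90% of the time (ATRX loss is typically mutually exclusive with codeletion)
atrx_loss = np.where(rng.random(n) < 0.9, 1 - codel, codel)

# Uninformative marker: independent of codeletion
noise_marker = rng.integers(0, 2, n)

# Mutual information (in nats) quantifies how much knowing the marker
# reduces uncertainty about the codeletion status
mi_atrx = mutual_info_score(codel, atrx_loss)
mi_noise = mutual_info_score(codel, noise_marker)
print(f"MI(ATRX-like marker, codeletion) = {mi_atrx:.3f} nats")
print(f"MI(noise marker, codeletion)     = {mi_noise:.3f} nats")
```

Because mutual information captures any statistical dependence, not just linear correlation, it is a natural fit for the linear and nonlinear biomarker associations the abstract describes.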
Evaluation of reactive astrogliosis by neuroanatomical assays represents a common experimental outcome for neuroanatomists. The literature demonstrates several conflicting results as to the accuracy of such measures. We posited that the diverging results within the neuroanatomy literature were due to suboptimal analytical workflows in addition to astrocyte regional heterogeneity. We therefore generated an automated segmentation workflow to extract features of glial fibrillary acidic protein (GFAP)- and aldehyde dehydrogenase family 1, member L1 (ALDH1L1)-labeled astrocytes with and without neuroinflammation. We achieved this by capturing multiplexed immunofluorescent confocal images of mouse brains treated with either vehicle or lipopolysaccharide (LPS), followed by implementation of our workflows. Using classical image analysis techniques focused on pixel intensity only, we were unable to identify differences between vehicle-treated and LPS-treated animals. However, when utilizing machine learning-based algorithms, we were able to (1) accurately predict whether segmented objects were derived from GFAP- or ALDH1L1-stained images, indicating that GFAP and ALDH1L1 highlight distinct morphological aspects of astrocytes; (2) predict the neuroanatomical region from which a segmented GFAP or ALDH1L1 object had been derived, indicating that morphological features of astrocytes change as a function of neuroanatomical location; and (3) achieve a statistically significant, albeit not highly accurate, prediction of whether objects came from LPS- or vehicle-treated animals, indicating that although features capable of distinguishing LPS-treated from vehicle-treated GFAP- and ALDH1L1-segmented objects exist, the two morphologies overlap substantially. We further determined that for most classification scenarios, nonlinear models were required for improved treatment class designations.
We propose that unbiased automated image analysis techniques coupled with well‐validated machine learning tools represent highly useful models capable of providing insights into neuroanatomical assays.
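The finding that nonlinear models outperform linear ones can be illustrated on synthetic data. The sketch below uses a hypothetical two-dimensional feature space with a nonlinear class boundary (it is not the study's actual morphological feature set) and compares a linear classifier against a random forest:

```python
from sklearn.datasets import make_circles
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a morphological feature space in which the two
# classes are separated by a nonlinear (circular) boundary
X, y = make_circles(n_samples=1000, noise=0.1, factor=0.5, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# A linear model cannot represent the circular boundary...
lr_acc = LogisticRegression().fit(Xtr, ytr).score(Xte, yte)
# ...while a nonlinear ensemble model can
rf_acc = RandomForestClassifier(random_state=0).fit(Xtr, ytr).score(Xte, yte)

print(f"linear (logistic regression) accuracy: {lr_acc:.2f}")
print(f"nonlinear (random forest) accuracy:    {rf_acc:.2f}")
```

When class-distinguishing structure lives in feature interactions rather than in any single feature axis, as the abstract suggests for astrocyte morphology, this gap between linear and nonlinear models is expected.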
Background: Diagnosis of primary brain tumors requires an integrated evaluation of histologic, anatomic, and molecular features, and accurate diagnosis is critical for prognosis and treatment. The 2016 WHO classification of primary brain tumors requires molecular methods for the diagnosis of some diffuse gliomas. This increases the difficulty of diagnosing diffuse gliomas, especially in low-resource countries and regions where molecular testing is scarcely available. The advent of whole-slide imaging permits the use of computer vision tools such as thresholding, deconvolution, and feature extraction, making it possible to evaluate various morphological features of neoplastic and normal cells. Our aim is to develop a machine learning (ML) model that serves as a low-cost predictor of molecular test results from histomorphological information. To that end, in this section we built a workflow for the detection of OLIG2-positive cells. Methods: We used color deconvolution and k-means clustering for cell segmentation on images of astrocytoma and oligodendroglioma slides stained with OLIG2. We then extracted morphological features of the segmented objects using the EBImage package. To establish a ground truth for classification, we labeled the segmented objects as artifact, cell, or fused. We divided the ground truth data into training (70%) and test (30%) sets, applied Random Forest (RF) modeling as the machine learning algorithm for object prediction, and used the Boruta algorithm to identify feature importance. All data analysis and segmentation were done in R. Results: Color deconvolution clearly outperformed k-means clustering for OLIG2; k-means clustering failed to detect cell borders. The RF model achieved high accuracy (88%) and a high kappa value (0.8219) for the classification of objects. According to Boruta, the most important features relate to shape and pixel intensities.
After Boruta selection, the RF model retained the same accuracy (88%) but showed a lower kappa value (0.7715) for the classification of objects (Table 1). We applied the same workflow to 55 brain tumor images and detected 5837 cells among 12938 segmented objects. In this section, we developed a successful workflow and used it for the detection of cells in OLIG2-stained digital slides.

Support or Funding Information: This study was supported by NIH R01HL132355.

Table 1. Confusion matrix and statistics (after Boruta selection).

                         Reference
Prediction       Artifact     Cell    Fused
Artifact             352        52        2
Cell                  74       750       30
Fused                 15        13      156

Overall: Accuracy 0.8712; 95% CI (0.8528, 0.888); No Information Rate 0.5644; P-value [Acc > NIR] < 2.2e-16; Kappa 0.7715; McNemar's test P-value 0.0001335.

                         Artifact     Cell    Fused
Sensitivity                0.7982   0.9202   0.8298
Specificity                0.9462   0.8347   0.9777
Pos Pred Value             0.8670   0.8782   0.8478
Neg Pred Value             0.9143   0.8898   0.9746
Prevalence                 0.3054   0.5644   0.1302
Detection Rate             0.2438   0.5194   0.1080
Detection Prevalence       0.2812   0.5914   0.1274
Balanced Accuracy          0.8722   0.8775   0.9037
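The study implemented its segmentation in R with EBImage. As an illustrative analog only, the sketch below reproduces the deconvolve-threshold-segment-extract steps in Python with scikit-image, whose rgb2hed function performs Ruifrok-Johnston color deconvolution into hematoxylin, eosin, and DAB channels. The input here is a synthetic DAB-stained image built for the demonstration, not a real OLIG2 slide:

```python
import numpy as np
from skimage.color import hed2rgb, rgb2hed
from skimage.draw import disk
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

# Build a synthetic "slide": two DAB-positive nuclei on a blank background
hed = np.zeros((100, 100, 3))
for center in [(30, 30), (70, 65)]:
    rr, cc = disk(center, 10)
    hed[rr, cc, 2] = 0.6          # channel 2 = DAB stain density
rgb = hed2rgb(hed)                 # compose into an RGB image

# --- workflow: deconvolve, threshold, segment, extract features ---
dab = rgb2hed(rgb)[:, :, 2]        # color deconvolution -> DAB channel
mask = dab > threshold_otsu(dab)   # automatic global threshold
labels = label(mask)               # connected-component segmentation
feats = [(r.area, r.eccentricity, r.mean_intensity)
         for r in regionprops(labels, intensity_image=dab)]

print(f"{labels.max()} objects segmented; features: {feats}")
```

Per-object shape and intensity features like these are the inputs the abstract describes feeding into the artifact/cell/fused classifier.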
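The overall statistics reported in Table 1 can be recomputed directly from the confusion-matrix counts. The sketch below (using scikit-learn rather than the R caret package that produced the original output) expands the counts into label vectors and recovers the accuracy and Cohen's kappa:

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Confusion matrix from Table 1 (rows = prediction, columns = reference)
classes = ["Artifact", "Cell", "Fused"]
cm = np.array([[352,  52,   2],
               [ 74, 750,  30],
               [ 15,  13, 156]])

# Expand the counts back into per-object label vectors
y_pred, y_true = [], []
for i, row in enumerate(cm):
    for j, count in enumerate(row):
        y_pred += [classes[i]] * count
        y_true += [classes[j]] * count

acc = accuracy_score(y_true, y_pred)
kappa = cohen_kappa_score(y_true, y_pred)
print(f"accuracy = {acc:.4f}, kappa = {kappa:.4f}")
```

This reproduces the reported accuracy of 0.8712 and kappa of 0.7715; kappa is lower than raw accuracy because it discounts the agreement expected by chance given the class prevalences.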