Automatically detecting and grading cancerous regions on radical prostatectomy (RP) sections facilitates graphical and quantitative pathology reporting, potentially benefiting post-surgery prognosis, recurrence prediction, and treatment planning after RP. Promising results for detecting and grading prostate cancer on digital histopathology images have been reported using machine learning techniques. However, the importance and applicability of those methods have not been fully investigated. We computed three-class tissue component maps (TCMs) from the images, where each pixel was labeled as nuclei, lumina, or other. We applied seven different machine learning approaches for cancer detection and grading on whole-mount RP tissue sections: three non-deep-learning classifiers with features extracted from TCMs, and four deep learning approaches using transfer learning with 1) the TCMs, 2) nuclei maps, 3) lumina maps, and 4) raw images. We performed leave-one-patient-out cross-validation against expert annotations using 286 whole-slide images from 68 patients. For both cancer detection and grading, transfer learning using TCMs performed best. Transfer learning using nuclei maps yielded slightly inferior overall performance, but the best performance for classifying higher-grade cancer. This suggests that three-class TCMs provide the major cues for cancer detection and grading primarily through nucleus features, which are the most important information for identifying higher-grade cancer.

The most common treatment for organ-confined prostate cancer (PCa) is radical prostatectomy (RP), the surgical removal of the prostate gland. Approximately 40% of prostate cancer patients undergo this surgery each year in the United States 1. Serum prostate-specific antigen (PSA) relapse occurs in 17%-29% of patients, reflecting cancer recurrence 2,3. Post-surgery prognosis, recurrence prediction, and selection and guidance of adjuvant therapy are all informed by the surgical pathology report.
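The evaluation described above uses leave-one-patient-out cross-validation, which splits at the patient level so that no patient's slides appear in both the training and test sets of the same fold. A minimal sketch of such a splitter (an illustrative helper, not the authors' code; the function and variable names are mine):

```python
def leave_one_patient_out(patient_ids):
    """Yield (held_out_patient, train_indices, test_indices) folds,
    where each fold holds out every image from a single patient.

    patient_ids: list giving the patient ID for each image.
    """
    # Preserve first-seen order of patients for reproducible folds
    # (dict.fromkeys keeps insertion order in Python 3.7+).
    unique_patients = list(dict.fromkeys(patient_ids))
    for held_out in unique_patients:
        test = [i for i, p in enumerate(patient_ids) if p == held_out]
        train = [i for i, p in enumerate(patient_ids) if p != held_out]
        yield held_out, train, test

# Example: 6 whole-slide images from 3 patients.
ids = ["A", "A", "B", "C", "C", "C"]
folds = list(leave_one_patient_out(ids))
```

Patient-level splitting matters here because multiple whole-slide images come from the same patient (286 images from 68 patients), and image-level splitting would leak patient-specific appearance between train and test.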
Typical pathology reports include tumor size, location, spread, and aggressiveness levels. In addition, PCa patients are grouped based on the Gleason score (GS), which is computed as the sum of the primary and secondary Gleason grades 3 at RP, into grade group 1 (GS 6; G3 + 3), grade group 2 (GS 7; G3 + 4), grade group 3 (GS 7; G4 + 3), grade group 4 (GS 8; G4 + 4), and grade group 5 (GS 9-10; G4 + 5, G5 + 4, and G5 + 5) disease 4,5, with treatment determined according to the risk level 6. Thus, although accurate post-RP risk stratification is crucial, clinical pathology reporting is currently primarily qualitative and subject to intra- and inter-observer variability. This leads to challenges for quantitative and repeatable pathology reporting and interpretation regarding lesion size, location, spread, and Gleason grade or score 3,7-10.
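The score-to-group mapping described above is deterministic and can be sketched directly; note that for GS 7 the group depends on which grade is primary (3 + 4 is group 2, 4 + 3 is group 3). A minimal sketch (the function name is mine, and only the grade combinations listed above are handled):

```python
def gleason_grade_group(primary, secondary):
    """Map primary and secondary Gleason grades to a grade group,
    following the grouping described in the text above.

    GS = primary + secondary.
    """
    score = primary + secondary
    if score <= 6:
        return 1                     # e.g. G3 + 3
    if score == 7:
        # 3 + 4 and 4 + 3 carry different risk despite the same score.
        return 2 if primary == 3 else 3
    if score == 8:
        return 4                     # e.g. G4 + 4
    return 5                         # GS 9-10: G4 + 5, G5 + 4, G5 + 5
```

For example, `gleason_grade_group(3, 4)` gives group 2 while `gleason_grade_group(4, 3)` gives group 3, reflecting that a dominant pattern-4 component indicates more aggressive disease at the same total score.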
Cellular profiling with multiplexed immunofluorescence (MxIF) images can contribute to more accurate patient stratification for immunotherapy. Accurate cell segmentation of the MxIF images is an essential step. We propose a deep learning pipeline to train a Mask R-CNN model (deep network) for cell segmentation using nuclear (DAPI)- and membrane (Na+/K+-ATPase)-stained images. We used two-stage domain adaptation: pre-training on a weakly labeled dataset, followed by fine-tuning on a manually annotated dataset. We validated our method against manual annotations on three different datasets. Our method yields results comparable to the multi-observer agreement on an ovarian cancer dataset and improves on state-of-the-art performance on a publicly available dataset of mouse pancreatic tissues. Our proposed method, using a weakly labeled dataset for pre-training, showed superior performance in all of our experiments. When using smaller training sample sizes for fine-tuning, the proposed method provided performance comparable to that obtained using much larger training sample sizes. Our results demonstrate that two-stage domain adaptation with a weakly labeled dataset can effectively boost system performance, especially when the training sample size is small. We deployed the model as a plug-in to CellProfiler, a widely used software platform for cellular image analysis.
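The pipeline above is validated against manual annotations; overlap metrics such as the Dice coefficient are a standard way to quantify such agreement between segmentation masks (the choice of metric here is my assumption, as the text does not name one). A minimal sketch on binary masks represented as sets of pixel coordinates:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary masks given as iterables of
    pixel coordinates: 2|A ∩ B| / (|A| + |B|). Returns a value in [0, 1].
    """
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # two empty masks agree trivially
    return 2.0 * len(a & b) / (len(a) + len(b))

# Example: two 2-pixel masks sharing one pixel -> Dice = 2*1 / (2+2) = 0.5
overlap = dice_coefficient({(0, 0), (0, 1)}, {(0, 1), (0, 2)})
```

In practice the same formula is applied per cell (or per image) over instance masks produced by the segmentation model and the manual annotations, then averaged.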