Prostate cancer is the most common noncutaneous cancer in men in the United States. The current paradigm for screening and diagnosis is imperfect, with relatively low specificity, high cost, and high morbidity. This study aims to generate new image contrasts by learning a distribution of unique image signatures associated with prostate cancer. In total, 48 patients were prospectively recruited for this institutional review board–approved study. Patients underwent multiparametric magnetic resonance imaging 2 weeks before surgery. Postsurgical tissues were annotated by a pathologist and aligned to the in vivo imaging. Radiomic profiles were generated by linearly combining 4 image contrasts (T2, apparent diffusion coefficient [ADC] 0-1000, ADC 50-2000, and dynamic contrast-enhanced) segmented using global thresholds. The distribution of radiomic profiles in high-grade cancer, low-grade cancer, and normal tissues was recorded, and the generated probability values were applied to a naive test set. The resulting Gleason probability maps were stable regardless of training cohort, functioned independently of prostate zone, and outperformed conventional clinical imaging (area under the curve [AUC] = 0.79). Extensive overlap was seen in the most common image signatures associated with high- and low-grade cancer, indicating that low- and high-grade tumors present similarly on conventional imaging.
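The signature-distribution idea above can be sketched in a few lines: each voxel's 4 thresholded contrasts form a 4-bit signature (16 possible profiles), the signature frequencies per tissue class become a look-up table of probabilities, and that table is applied to unseen voxels. This is a minimal illustration on synthetic data; the threshold values and the random inputs are assumptions, not the study's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-voxel contrasts (T2, ADC 0-1000, ADC 50-2000, DCE), scaled to [0, 1].
n_vox = 5000
contrasts = rng.random((n_vox, 4))
labels = rng.integers(0, 3, n_vox)  # 0 = normal, 1 = low-grade, 2 = high-grade (synthetic)

# Global thresholds turn each contrast into a binary flag; the 4 flags
# together form a 4-bit "image signature" (16 possible radiomic profiles).
thresholds = np.array([0.5, 0.5, 0.5, 0.5])  # hypothetical global thresholds
bits = (contrasts > thresholds).astype(int)
signatures = bits @ (2 ** np.arange(4))      # encode each profile as an int 0..15

# Record the distribution of signatures per tissue class,
# then derive P(high-grade | signature) as a look-up table.
counts = np.zeros((16, 3))
np.add.at(counts, (signatures, labels), 1)
p_high = counts[:, 2] / counts.sum(axis=1).clip(min=1)

# Apply the learned table to a naive test voxel.
test_voxel = np.array([0.8, 0.2, 0.9, 0.6])
sig = int((test_voxel > thresholds).astype(int) @ (2 ** np.arange(4)))
print(f"signature={sig}, P(high-grade)={p_high[sig]:.3f}")
```

Mapping every voxel of a test image through `p_high` in this way produces a per-voxel probability map analogous to the Gleason probability maps described above.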
Purpose: Prostate cancer primarily arises from the glandular epithelium. Histomorphometric techniques have been used to assess the glandular epithelium in automated detection and classification pipelines; however, they are often rigid in their implementation, and their performance suffers on large datasets where variation in staining, imaging, and preparation is difficult to control. The purpose of this study is to quantify the performance of a pixelwise segmentation algorithm trained using different combinations of weak and strong stroma, epithelium, and lumen labels in a prostate histology dataset. Approach: We combined weakly labeled datasets generated using simple morphometric techniques with high-quality labeled datasets from human observers in prostate biopsy cores to train a convolutional neural network for use in whole-mount prostate labeling pipelines. With the trained networks, we characterize pixelwise segmentation of stroma, epithelium, and lumen (SEL) regions on both biopsy-core and whole-mount H&E-stained tissue. Results: We provide evidence that simply training a deep learning algorithm on weakly labeled data generated from rigid morphometric methods improves the robustness of classification over the morphometric methods used to train the classifier. Conclusions: We show that combining weak and strong labels to train the CNN not only improves qualitative SEL labeling within tissue, but also yields labels that outperform the morphometrically derived labels it was trained on when used for cancer classification in a higher-order algorithm.
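One common way to combine weak (morphometric) and strong (human) labels during training is to weight each pixel's loss by the trustworthiness of its label, so noisy weak labels still contribute signal without dominating. The sketch below shows a per-pixel weighted cross-entropy in plain numpy; the specific weight values (1.0 vs. 0.3) and the toy batch are assumptions for illustration, not the study's actual training configuration.

```python
import numpy as np

def weighted_pixel_ce(probs, targets, weights):
    """Per-pixel cross-entropy with per-pixel label-confidence weights.

    probs:   (n_pixels, n_classes) softmax outputs from the network
    targets: (n_pixels,) integer labels (0 = stroma, 1 = epithelium, 2 = lumen)
    weights: (n_pixels,) e.g. 1.0 for strong (human) labels and 0.3 for weak
             (morphometric) labels, so noisier labels contribute less.
    """
    eps = 1e-12  # guard against log(0)
    nll = -np.log(probs[np.arange(len(targets)), targets] + eps)
    return float(np.sum(weights * nll) / np.sum(weights))

# Toy batch: 4 pixels, 3 SEL classes.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.2, 0.2, 0.6],
                  [0.3, 0.4, 0.3]])
targets = np.array([0, 1, 2, 1])
weights = np.array([1.0, 1.0, 0.3, 0.3])  # first two pixels strongly labeled
loss = weighted_pixel_ce(probs, targets, weights)
```

The same weighting scheme drops into a standard CNN training loop by multiplying the per-pixel loss map before reduction.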
Background and Objective: Stress urinary incontinence (SUI) can occur due to a variety of etiologies. For male patients specifically, SUI is typically thought of as iatrogenic, secondary to intrinsic sphincter deficiency occurring after prostate surgery. Given the noted negative impact that SUI can have on a man's quality of life, multiple treatment options have been developed to improve symptoms. However, there is no "one-size-fits-all" approach to the management of male SUI. In this narrative review, we sought to highlight some of the various procedures and devices available to treat men with bothersome SUI. Methods: This narrative review gathered primary resources through a Medline search, and secondary resources by cross-referencing citations used in articles of interest. We started our investigation by searching for previous systematic reviews on male SUI and treatments for male SUI. Furthermore, we reviewed societal guidelines, such as the American Urological Association and Society of Urodynamics, Female Pelvic Medicine and Urogenital Reconstruction guidelines and the recently published European Association of Urology guidelines. Our review focused on English-language full-length manuscripts when available.
require a penile prosthesis. Surgical Complications: two patients (12.5%) showed postoperative neurapraxia, which resolved spontaneously after 12 months (Clavien I), and two other patients (12.5%) suffered extrusion of the receiver block (Clavien IIIb). We observed malfunction/damage of the external hardware in 10 patients (62.5%), mostly related to operator misuse. CONCLUSIONS: Success of SARS was 87.5% for micturition, 94% for defecation, and 33% for intercourse. The main limitations were observed in caregiver-dependent quadriplegics due to operator misuse, resulting in damage to the external hardware or symptomatic UTIs. Loss of reflex erections was an undesired side effect in sexually active patients. Although SARS can elicit erections by stimulation, the system may not be reliable enough for intercourse. Accordingly, we believe the best candidates for SARS are independent adult paraplegic women, as well as male paraplegics not concerned with loss of reflex erections.
Prostate cancer (PCa) arises from the glandular epithelium. To confirm the presence of cancer, a pathologist uses stained tissue samples taken either from biopsy or prostatectomy. Due to the relationship between epithelium and cancer, special consideration is given to regions identified as epithelium. Histomorphometric techniques have long been used to identify areas of epithelium within the tissue for automated detection and classification pipelines; however, they are often rigid in their implementation, and their performance suffers on large datasets where variation in staining, imaging, and preparation is difficult to control. The recent development and popularity of deep learning methods for image processing and segmentation offer great promise for developing robust classification pipelines for such ends; however, they require large labeled datasets for training. The goal of this study was to combine weakly labeled datasets generated using histomorphometric techniques with high-quality labeled datasets from human observers to train a convolutional neural network. In doing so, we developed a pixel-wise segmentation algorithm for classification of stroma, epithelium, and lumen (SEL) regions for use on both biopsy-core and whole-mount bright-field H&E-stained tissue. We provide evidence that by simply training a deep learning algorithm on weakly labeled data, we can improve the robustness of the classification.
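The rigid morphometric labeling that produces the weak labels can be sketched as simple intensity thresholding of an H&E tile: open lumen appears bright, hematoxylin-rich epithelium appears dark, and stroma falls in between. The threshold values below are hypothetical and, as the abstract notes, would need retuning whenever staining or imaging conditions drift (which is exactly the rigidity the CNN is meant to overcome).

```python
import numpy as np

def morphometric_sel_labels(gray, lumen_thresh=0.85, stroma_thresh=0.55):
    """Rigid threshold-based weak SEL labeling of a grayscale H&E tile in [0, 1].

    Bright pixels -> lumen (open gland space), mid-intensity pixels -> stroma,
    dark (hematoxylin-rich) pixels -> epithelium. Thresholds are hypothetical
    and would be tuned per staining batch in practice.
    Returns uint8 labels: 0 = stroma, 1 = epithelium, 2 = lumen.
    """
    labels = np.full(gray.shape, 1, dtype=np.uint8)  # default: epithelium (dark)
    labels[gray >= stroma_thresh] = 0                # mid intensity: stroma
    labels[gray >= lumen_thresh] = 2                 # brightest: lumen
    return labels

# Toy 3x3 grayscale tile.
tile = np.array([[0.90, 0.60, 0.30],
                 [0.95, 0.50, 0.20],
                 [0.70, 0.40, 0.10]])
weak_labels = morphometric_sel_labels(tile)
```

Labels generated this way over many tiles form the weak half of the training set; a small number of observer-annotated tiles supply the strong half.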
Finally, we show that not only does this method carry primary improvement on SEL labeling within tissue, but that the information provided by the deep learning generated labels improves cancer classification in a higher-order algorithm over the histomorphometric labels that it was trained on. Support or Funding Information: Funding was provided by the State of Wisconsin Tax Check-off Program for Prostate Cancer Research (R01CA218144 and R01CA113580) and the National Center for Advancing Translational Sciences (NIH UL1TR001436 and TL1TR001437). This abstract is from the Experimental Biology 2019 Meeting. There is no full text article associated with this abstract published in The FASEB Journal.