Purpose: Transrectal ultrasound (TRUS) is a versatile, real-time imaging modality commonly used in image-guided prostate cancer interventions (e.g., biopsy and brachytherapy). Accurate segmentation of the prostate is key to biopsy needle placement, brachytherapy treatment planning, and motion management. Manual segmentation during these interventions is time-consuming and subject to inter- and intraobserver variation. To address these drawbacks, we aimed to develop a deep learning-based method that integrates deep supervision into a three-dimensional (3D) patch-based V-Net for prostate segmentation. Methods and materials: We developed a multidirectional deep-learning-based method to automatically segment the prostate for ultrasound-guided radiation therapy. A 3D supervision mechanism is integrated into the V-Net stages to deal with the optimization difficulties of training a deep network with limited training data. We combine a binary cross-entropy (BCE) loss and a batch-based Dice loss into a stage-wise hybrid loss function for deep-supervision training. During the segmentation stage, patches extracted from the newly acquired ultrasound image are fed to the trained network, which adaptively labels the prostate tissue. The final segmented prostate volume is reconstructed using patch fusion and further refined through contour refinement. Results: Forty-four patients' TRUS images were used to test our segmentation method. Our segmentation results were compared with manually segmented contours (ground truth). The mean prostate volume Dice similarity coefficient (DSC), Hausdorff distance (HD), mean surface distance (MSD), and residual mean surface distance (RMSD) were 0.92 ± 0.03, 3.94 ± 1.55 mm, 0.60 ± 0.23 mm, and 0.90 ± 0.38 mm, respectively.
Conclusion: We developed a novel deeply supervised deep learning-based approach with reliable contour refinement to automatically segment the prostate on TRUS images, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation. The proposed technique could be a useful tool for diagnostic and therapeutic applications in prostate cancer.
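The stage-wise hybrid loss described above, BCE combined with a batch-based Dice term for each deeply supervised stage output, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the equal-weight averaging over stages and the `bce_weight` parameter are assumptions for illustration only.

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    """Binary cross-entropy averaged over all voxels."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def batch_dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss computed over the whole batch of voxels."""
    intersection = np.sum(pred * target)
    return float(1.0 - (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps))

def hybrid_loss(stage_preds, target, bce_weight=0.5):
    """Hybrid loss: each deeply supervised stage output contributes
    a weighted BCE + Dice term; stages are averaged (assumed weighting)."""
    total = 0.0
    for pred in stage_preds:
        total += bce_weight * bce_loss(pred, target) \
               + (1.0 - bce_weight) * batch_dice_loss(pred, target)
    return total / len(stage_preds)
```

In a real training loop each `pred` would be the sigmoid output of one supervised V-Net stage, and the loss would be backpropagated through all stages jointly.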
Purpose Automatic breast ultrasound (ABUS) imaging has become an essential tool in breast cancer diagnosis because it provides information complementary to other imaging modalities. Lesion segmentation on ABUS is a prerequisite step of breast cancer computer-aided diagnosis (CAD). This work aims to develop a deep learning-based method for automatic breast tumor segmentation using three-dimensional (3D) ABUS. Methods For breast tumor segmentation in ABUS, we developed a Mask scoring region-based convolutional neural network (R-CNN) that consists of five subnetworks: a backbone, a regional proposal network, a region convolutional neural network head, a mask head, and a mask score head. A network block building a direct correlation between mask quality and region class was integrated into the Mask scoring R-CNN-based framework for the segmentation of new ABUS images with ambiguous regions of interest (ROIs). For segmentation accuracy evaluation, we retrospectively investigated 70 patients with breast tumors confirmed by needle biopsy and manually delineated on ABUS, of whom 40 were used for fivefold cross-validation and 30 for a hold-out test. The agreement between the automatic breast tumor segmentations and the manual contours was quantified by (I) six metrics: Dice similarity coefficient (DSC), Jaccard index, 95% Hausdorff distance (HD95), mean surface distance (MSD), residual mean square distance (RMSD), and center of mass distance (CMD); and (II) Pearson correlation analysis and Bland-Altman analysis. Results The mean (median) DSC was 85% ± 10.4% (89.4%) and 82.1% ± 14.5% (85.6%) for cross-validation and the hold-out test, respectively. The corresponding HD95, MSD, RMSD, and CMD of the two tests were 1.646 ± 1.191 and 1.665 ± 1.129 mm, 0.489 ± 0.406 and 0.475 ± 0.371 mm, 0.755 ± 0.755 and 0.751 ± 0.508 mm, and 0.672 ± 0.612 and 0.665 ± 0.729 mm.
The mean volumetric difference (mean ± 1.96 standard deviations) was 0.47 cc ([−0.77, 1.71]) for cross-validation and 0.23 cc ([−0.23, 0.69]) for the hold-out test. Conclusion We developed a novel Mask scoring R-CNN approach for automated segmentation of breast tumors in ABUS images and demonstrated its accuracy for breast tumor segmentation. Our learning-based method can potentially assist the clinical CAD of breast cancer using 3D ABUS imaging.
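The overlap and volumetric metrics reported above (DSC, Jaccard index, volumetric difference) have standard definitions computable directly from binary masks. A minimal NumPy sketch follows; the voxel-volume parameter is an assumed input, not a value from the study.

```python
import numpy as np

def dice_jaccard(auto_mask, manual_mask):
    """Overlap metrics between a binary automatic segmentation
    and the manual ground-truth mask."""
    a = auto_mask.astype(bool)
    m = manual_mask.astype(bool)
    inter = np.logical_and(a, m).sum()
    dsc = 2.0 * inter / (a.sum() + m.sum())        # DSC = 2|A∩M| / (|A|+|M|)
    jaccard = inter / np.logical_or(a, m).sum()     # Jaccard = |A∩M| / |A∪M|
    return float(dsc), float(jaccard)

def volume_difference_cc(auto_mask, manual_mask, voxel_volume_mm3):
    """Volumetric difference in cubic centimetres (1 cc = 1000 mm^3)."""
    return float((auto_mask.sum() - manual_mask.sum()) * voxel_volume_mm3 / 1000.0)
```

A Bland-Altman analysis, as used in the paper, would then plot the per-patient volume differences against the mean of the two volumes.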
Cancer cells obtain their invasive potential not only through genetic mutations, but also by changing their biophysical and biomechanical features and adapting to the surrounding microenvironment. The extracellular matrix (ECM), as a crucial component of the tumor microenvironment, provides mechanical support for the tissue, mediates cell-microenvironment interactions, and plays a key role in cancer cell invasion. The biomechanics of the extracellular matrix, particularly collagen, has been extensively studied in the biomechanics community. Cell migration has also received much attention from both experimental and modeling efforts. However, a detailed mechanistic understanding of tumor cell-ECM interactions, especially during cancer invasion, is still lacking. This chapter reviews recent advances in the studies of ECM biomechanics, cell migration, and cell-ECM interactions in the context of cancer invasion.
Epicardial adipose tissue (EAT) is a visceral fat deposit known for its association with factors such as obesity, diabetes mellitus, age, and hypertension. Fast and reproducible segmentation of the EAT is important for interpreting its role as an independent risk marker. However, EAT has a variable distribution, and various diseases may affect its volume, which increases the complexity of the already time-consuming manual segmentation work. We propose a 3D deep attention U-Net method to automatically segment the EAT from coronary computed tomography angiography (CCTA). Five-fold cross-validation and hold-out experiments were used to evaluate the proposed method through a retrospective investigation of 200 patients. The automatically segmented EAT volume was compared with physician-approved clinical contours. The quantitative metrics used were the Dice similarity coefficient (DSC), sensitivity, specificity, Jaccard index (JAC), Hausdorff distance (HD), mean surface distance (MSD), residual mean square distance (RMSD), and center of mass distance (CMD). For cross-validation, the median DSC, sensitivity, and specificity were 92.7%, 91.1%, and 95.1%, respectively; the JAC, HD, CMD, MSD, and RMSD were 82.9% ± 8.8%, 3.77 ± 1.86 mm, 1.98 ± 1.50 mm, 0.37 ± 0.24 mm, and 0.65 ± 0.37 mm, respectively. For the hold-out test, the accuracy of the proposed method remained high. We developed a novel deep learning-based approach for the automated segmentation of the EAT on CCTA images and demonstrated its high accuracy through comparison with ground-truth contours of 200 clinical patient cases using eight quantitative metrics, Pearson correlation, and Bland-Altman analysis. Our automatic EAT segmentation results show the potential of the proposed method for use in computer-aided diagnosis of coronary artery diseases (CADs) in clinical settings.
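The surface-distance metrics used in these evaluations (HD, MSD, RMSD) are built from nearest-neighbour distances between the automatic and manual segmentation surfaces. A brute-force NumPy sketch for small point sets is shown below; the (N, 3) coordinate arrays in millimetres are an assumed input format, and clinical pipelines typically use distance transforms rather than all-pairs distances for efficiency.

```python
import numpy as np

def surface_distances(surf_a, surf_b):
    """Distance from each point on surface A to its nearest
    point on surface B, for (N, 3) coordinate arrays in mm."""
    diffs = surf_a[:, None, :] - surf_b[None, :, :]   # (Na, Nb, 3) via broadcasting
    d = np.sqrt((diffs ** 2).sum(axis=-1))            # all pairwise distances
    return d.min(axis=1)

def distance_metrics(surf_a, surf_b):
    """Symmetric Hausdorff distance (HD), mean surface distance (MSD),
    and residual mean-square distance (RMSD)."""
    d_ab = surface_distances(surf_a, surf_b)
    d_ba = surface_distances(surf_b, surf_a)
    all_d = np.concatenate([d_ab, d_ba])
    hd = float(max(d_ab.max(), d_ba.max()))           # worst-case mismatch
    msd = float(all_d.mean())                         # average mismatch
    rmsd = float(np.sqrt((all_d ** 2).mean()))        # penalizes large deviations
    return hd, msd, rmsd
```

A percentile variant of the same computation (e.g., the 95th percentile of `all_d`) gives the HD95 reported in the ABUS study above.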
Understanding cellular remodeling in response to mechanical stimuli is a critical step in elucidating the mechanical activation of biochemical signaling pathways. Experimental evidence indicates that external stress-induced subcellular adaptation is accomplished through dynamic cytoskeletal reorganization. To study the interactions between subcellular structures involved in transducing mechanical signals, we combined experimental data and computational simulations to evaluate real-time mechanical adaptation of the actin cytoskeletal network. The actin cytoskeleton was imaged while an external tensile force was applied to live vascular smooth muscle cells using a fibronectin-functionalized atomic force microscope probe. In addition, we performed computational simulations of active cytoskeletal networks under an external tensile force. The experimental data and simulation results suggest that mechanical structural adaptation occurs before chemical adaptation during filament bundle formation: actin filaments first align in the direction of the external force, establishing anisotropic filament orientations, and the chemical evolution of the network then follows these anisotropic structures to further develop the bundle-like geometry. Our findings present an alternative two-step explanation for the formation of actin bundles under mechanical stimulation and provide new insights into the mechanism of mechanotransduction.
A lignin amphoteric surfactant and betaine can enhance the enzymatic hydrolysis of lignocellulose and aid cellulase recovery. This study examined the effects of lignosulfonate quaternary ammonium salt (SLQA) and dodecyl dimethyl betaine (BS12) on enzymatic hydrolysis digestibility, ethanol yield, yeast cell viability, and other properties of high-solid enzymatic hydrolysis and fermentation of a corncob residue. The results suggested that SLQA and 1 g/L BS12 effectively improved the ethanol yield by enhancing enzymatic hydrolysis. SLQA had no significant effect on the yeast cell membrane or glucose fermentation. However, 5 g/L BS12 reduced the ethanol yield because it damaged the yeast cell membrane and inhibited the conversion of glucose to ethanol. Our research also suggested that 1 g/L BS12 enhanced the ethanol yield of corncob residue fermentation because lignin in the corncob adsorbed BS12 and lowered its solution concentration to a level safe for the yeast.