Through a systematic analysis of multiparametric MR imaging features, we are able to build models with improved predictive value over conventional imaging metrics. The results are encouraging, suggesting that the wealth of information in imaging radiomics should be further explored to help tailor treatment in the era of personalized medicine. Clin Cancer Res; 22(21); 5256-64. ©2016 AACR.
Breast density has been established as an independent risk factor for the development of breast cancer: an increase in mammographic density is associated with an increased cancer risk. Because a mammogram is a projection image, differences in body position, level of compression, and x-ray intensity may lead to large variability in the density measurement. Breast MRI provides strong soft-tissue contrast between fibroglandular and fatty tissues, as well as three-dimensional coverage of the entire breast, making it well suited for density analysis. To develop the MRI-based method, the first task is to achieve consistent segmentation of the breast region from the body. The method included an initial segmentation based on body landmarks of each individual woman, followed by fuzzy C-means (FCM) classification to exclude air and lung tissue, B-spline curve fitting to exclude the chest wall muscle, and dynamic searching to exclude the skin. Then, within the segmented breast, adaptive FCM was used for simultaneous bias field correction and fibroglandular tissue segmentation. Intraoperator and interoperator reproducibility was evaluated using 11 selected cases covering a broad spectrum of breast densities with different parenchymal patterns. The average standard deviation for breast volume and percent density measurements was in the range of 3%-4%, whether across three trials by one operator or across three different operators. Body-position dependence was also investigated by scanning two healthy volunteers, each at five different positions; the variation was likewise in the range of 3%-4%. These initial results suggest that the three-dimensional MRI-based technique can achieve sufficient consistency to be applied in longitudinal follow-up studies to detect small changes. It may also provide a reliable method for evaluating changes in breast density for risk management, or for weighing the benefits and risks of hormone replacement therapy or chemoprevention.
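As a concrete illustration of the classification step, the sketch below implements plain fuzzy C-means on the voxel intensities inside an already-segmented breast. This is a minimal Python/NumPy sketch, not the paper's adaptive FCM (it does no bias field correction); the `breast_voxels` array, the two-cluster setup, and the assumption that fibroglandular tissue is the darker cluster on non-fat-suppressed T1 images are all illustrative assumptions.

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Plain fuzzy C-means on a 1-D array of voxel intensities.

    Returns cluster centers and the membership matrix (n_voxels x n_clusters).
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float).reshape(-1, 1)          # (N, 1)
    u = rng.random((x.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)                      # rows sum to 1
    for _ in range(max_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]     # weighted means, (c, 1)
        dist = np.abs(x - centers.T) + 1e-12               # voxel-to-center, (N, c)
        u_new = 1.0 / dist ** (2.0 / (m - 1.0))            # standard FCM update
        u_new /= u_new.sum(axis=1, keepdims=True)
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return centers.ravel(), u

# Illustrative use inside an already-segmented breast mask
# (`breast_voxels` is a hypothetical 1-D array of T1 intensities):
#   centers, u = fuzzy_c_means(breast_voxels, n_clusters=2)
#   fibro = int(np.argmin(centers))   # assumes fibroglandular is darker than fat
#   percent_density = 100.0 * (u.argmax(axis=1) == fibro).mean()
```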
Rationale and Objectives-To investigate the feasibility of using quantitative morphology and texture features of breast lesions for diagnostic prediction, and to explore the association of computerized features with lesion phenotype appearance on MRI.

Materials and Methods-43 malignant and 28 benign lesions were used in this study. A systematic approach was carried out, from automated lesion segmentation and quantitative feature extraction to diagnostic feature selection using an artificial neural network (ANN) and lesion classification. Eight morphological parameters and 10 gray-level co-occurrence matrix (GLCM) texture features were obtained from each lesion. The diagnostic performance of the selected features in differentiating between malignant and benign lesions was analyzed using receiver operating characteristic (ROC) analysis.

Results-Six features were selected by the ANN using leave-one-out cross-validation: Compactness, NRL Entropy, Volume, Gray Level Entropy, Gray Level Sum Average, and Homogeneity. The area under the ROC curve was 0.86. When the database was divided into half training and half validation sets, a classifier built from 5 features selected on the training half achieved an AUC of 0.82 on the validation half. The selected morphology feature "Compactness" was associated with shape and margin in the BI-RADS lexicon: benign lesions tended toward round shapes with smooth margins, and malignant lesions toward more irregular shapes. The selected texture features were associated with homogeneous/heterogeneous patterns and with enhancement intensity; malignant lesions had higher intensity and a broader distribution in the enhancement histogram (more heterogeneous) than benign ones.

Conclusion-Quantitative analysis of morphology and texture features of breast lesions was feasible, and these features could be selected by an ANN to form a classifier for differential diagnosis. Establishing the link between computer-based features and the visual descriptors defined in the BI-RADS lexicon will provide the foundation for the acceptance of quantitative diagnostic features in the development of computer-aided diagnosis (CAD). CAD for mammography is by far the most mature among all medical image analysis systems; it detects abnormalities or suspicious regions and marks them with labels indicating different features with varying degrees of malignancy [7][8][9][10]. A great deal of research has also been devoted to developing CAD for breast ultrasound [11][12][13]. Given the many more images acquired in MRI compared with mammography and ultrasound, development of breast MRI CAD is far more challenging, but on the other hand would be very helpful. Existing commercial CAD systems for breast MRI, such as CADstream (Confirma Inc., Kirkland, WA) and fTP (CADsciences, White Plains, NY), provide display platforms that show various presentations of the enhanced lesions to assist radiologists' interpretation. The display is mainly based on enhancement kinetic features, such as the wash-out patterns, of voxels with percent enhancement above a pre-se...
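To make the feature definitions concrete, here is a hedged Python sketch (assuming scikit-image >= 0.19 for `graycomatrix`) that computes a 2-D compactness value and a few GLCM statistics for a single slice. The paper's features are defined in 3-D and may differ in normalization, so the `lesion_features` helper, its quantization scheme, and its background handling are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from skimage.feature import graycomatrix
from skimage.measure import label, regionprops

def lesion_features(img, mask, levels=32):
    """Compactness plus a few GLCM statistics for one 2-D slice.

    img: grayscale slice; mask: binary lesion mask. Illustrative only.
    """
    # Morphology: 2-D compactness, normalized so a perfect circle scores 1.0.
    props = regionprops(label(mask.astype(int)))[0]
    compactness = 4.0 * np.pi * props.area / props.perimeter ** 2

    # Quantize lesion intensities to `levels` gray levels (0 = background).
    vals = img[mask > 0].astype(float)
    q = np.zeros(img.shape, dtype=np.uint8)
    edges = np.linspace(vals.min(), vals.max(), levels - 1)
    q[mask > 0] = np.digitize(vals, edges)                  # maps to 1..levels-1

    # GLCM over the lesion bounding box (distance 1, horizontal offset).
    minr, minc, maxr, maxc = props.bbox
    glcm = graycomatrix(q[minr:maxr, minc:maxc], distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0].copy()
    p[0, :] = 0.0                                           # drop pairs involving
    p[:, 0] = 0.0                                           # the background level
    p /= p.sum()

    i, j = np.indices(p.shape)
    return {
        "compactness": compactness,
        "homogeneity": np.sum(p / (1.0 + (i - j) ** 2)),    # skimage convention
        "glcm_entropy": -np.sum(p[p > 0] * np.log2(p[p > 0])),
        "glcm_sum_average": np.sum((i + j) * p),
    }
```

In a pipeline like the one described, such features would be computed per lesion and fed to the ANN-based feature selector and classifier.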
Patients who show a greater reduction in total choline (tCho) compared with changes in tumor size are more likely to achieve pathologic complete response (pCR). The change in tumor size halfway through therapy was the most accurate predictor of pCR.
Purpose To improve image quality and computed tomography (CT) number accuracy of daily cone beam CT (CBCT) through a deep learning methodology with a generative adversarial network. Methods One hundred and fifty paired pelvic CT and CBCT scans were used for model training and validation. An unsupervised deep learning method, a 2.5D pixel-to-pixel (pix2pix) generative adversarial network (GAN) model with feature mapping, was proposed. A total of 12 000 slice pairs of CT and CBCT were used for model training, with ten-fold cross-validation applied to verify model robustness. Paired CT-CBCT scans from an additional 15 pelvic patients and 10 head-and-neck (HN) patients, with CBCT images collected on a different machine, were used for independent testing. Beyond the proposed method, other network architectures were also tested: 2D vs 2.5D; the GAN model with vs without feature mapping; the GAN model with vs without an additional perceptual loss; and previously reported models such as U-net and cycleGAN with or without identity loss. Image quality of the deep-learning-generated synthetic CT (sCT) images was quantitatively compared against the reference CT (rCT) images using the mean absolute error (MAE) of Hounsfield units (HU) and the peak signal-to-noise ratio (PSNR). Dosimetric calculation accuracy was further evaluated with both photon and proton beams. Results The deep-learning-generated sCTs showed improved image quality, with reduced artifact distortion and improved soft-tissue contrast. The proposed 2.5D pix2pix GAN with feature matching (FM) was the best model among all tested methods, producing the highest PSNR and the lowest MAE relative to rCT. The dose distribution demonstrated high accuracy for photon-based planning, but more work is needed for proton-based treatment. Once the model was trained, it took 11-12 ms to process one slice, and a full 3D daily CBCT volume (80 slices) could be converted in less than a second using an NVIDIA GeForce GTX Titan X GPU (12 GB, Maxwell architecture). Conclusion The proposed deep learning algorithm is a promising and efficient way to improve CBCT image quality, and thus has the potential to support online CBCT-based adaptive radiotherapy.
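For reference, the two image quality metrics named above can be computed as in the following minimal NumPy sketch. The body mask and the choice of data range for PSNR are assumptions, since the abstract does not specify how either was defined.

```python
import numpy as np

def mae_hu(sct, rct, body_mask):
    """Mean absolute HU error between synthetic and reference CT.

    body_mask is an assumed boolean array restricting the comparison to the
    patient body; the paper may have evaluated over a different region.
    """
    return float(np.mean(np.abs(sct[body_mask] - rct[body_mask])))

def psnr(sct, rct, data_range=None):
    """Peak signal-to-noise ratio in dB.

    data_range defaults to the reference CT's intensity span, which is one
    common convention but an assumption here.
    """
    sct = sct.astype(float)
    rct = rct.astype(float)
    if data_range is None:
        data_range = rct.max() - rct.min()
    mse = np.mean((sct - rct) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))
```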