Background
Digital breast tomosynthesis (DBT) has gained popularity as a breast imaging modality due to its pseudo-3D reconstruction and improved diagnostic accuracy compared to digital mammography. However, DBT faces challenges in image quality and quantitative accuracy due to scatter radiation. Recent advances in deep learning (DL) have shown promise in using fast convolutional neural networks for scatter correction, achieving results comparable to Monte Carlo (MC) simulations.

Purpose
To predict the scatter radiation signal in DBT projections within clinically acceptable times, using only clinically available data such as compressed breast thickness and acquisition angle.

Methods
Scatter estimates were generated with MC simulations of two types of digital breast phantoms. One set consisted of 600 realistically shaped homogeneous breast phantoms used for initial DL training. The other comprised 80 anthropomorphic phantoms with realistic internal tissue texture, aimed at fine-tuning the DL model for clinical applications. The MC simulations generated scatter and primary maps per projection angle for a wide-angle DBT system. Both datasets were used to train (7680 projections from homogeneous phantoms), validate (960 and 192 projections from the homogeneous and anthropomorphic phantoms, respectively), and test (960 and 48 projections from the homogeneous and anthropomorphic phantoms, respectively) the DL model. The DL output was compared to the corresponding MC ground truth using quantitative and qualitative metrics, such as the mean relative difference (MRD) and mean absolute relative difference (MARD), and to previously published scatter-to-primary ratios (SPRs) for similar breast phantoms. The scatter-corrected DBT reconstructions were evaluated by analyzing the resulting linear attenuation values and by visual assessment of corrected projections in a clinical dataset. The time required for training and for prediction per projection, as well as the time needed to produce scatter-corrected projection images, was also tracked.

Results
The quantitative comparison between DL scatter predictions and MC simulations showed a median MRD of 0.05% (interquartile range (IQR), −0.04% to 0.13%) and a median MARD of 1.32% (IQR, 0.98% to 1.85%) for homogeneous phantom projections, and a median MRD of −0.21% (IQR, −0.35% to −0.07%) and a median MARD of 1.43% (IQR, 1.32% to 1.66%) for the anthropomorphic phantoms. The SPRs for different breast thicknesses and at different projection angles were within ±15% of the previously published ranges. Visual assessment showed good prediction capability of the DL model, with a close match between MC and DL scatter estimates, as well as between DL-based scatter-corrected and anti-scatter-grid-corrected cases. The scatter correction improved the accuracy of the reconstructed linear attenuation of adipose tissue, reducing the error from −16% and −11% to −2.3% and 4.4% for an anthropomorphic digital phantom and a clinical case with similar breast thickness, respectively. DL model training took 40 min, and prediction of a single projection took less than 0.01 s. Generating scatter-corrected images took 0.03 s per projection for clinical exams and 0.16 s for one entire projection set.

Conclusions
This DL-based method for estimating the scatter signal in DBT projections is fast and accurate, paving the way for future quantitative applications.
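For illustration, the MRD and MARD comparison between a DL scatter estimate and its MC ground truth can be sketched as below; the array shapes, noise level, and values are hypothetical, not the study's data:

```python
import numpy as np

def mrd_mard(dl_scatter, mc_scatter):
    """Per-projection mean relative difference (MRD) and mean absolute
    relative difference (MARD) between a DL scatter estimate and the
    Monte Carlo ground truth, both in percent."""
    rel = (dl_scatter - mc_scatter) / mc_scatter
    return 100 * rel.mean(), 100 * np.abs(rel).mean()

# Hypothetical example: a DL prediction with ~1% multiplicative noise
# around a flat MC scatter map (purely illustrative).
rng = np.random.default_rng(0)
mc = np.full((64, 64), 50.0)
dl = mc * (1 + rng.normal(0.0, 0.01, mc.shape))
mrd, mard = mrd_mard(dl, mc)
```

Note that MARD bounds the magnitude of MRD from above, since positive and negative relative differences cancel in the MRD but not in the MARD; this is why the reported MARD values exceed the corresponding |MRD|.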
High breast density (BD) is recognized as an independent risk factor for breast cancer development, in addition to negatively impacting the sensitivity of mammography. Although BD is normally assessed with the BI-RADS reporting system, this evaluation is qualitative and has been shown to vary considerably across readers. In this pilot study, we present a deep learning (DL) method to quantify BD from a standard two-view (cranio-caudal and medio-lateral oblique) mammography exam. With the aim of developing a method based on an objective ground truth, the DL model was trained and validated using 88 simulated mammograms from an equal number of distinct 3D digital breast phantoms for which BD is known. The phantoms had previously been generated through segmentation and simulated mechanical compression of patient dedicated breast CT images, allowing exact calculation of BD in each case. Different data augmentations were applied prior to simulation to increase the dataset size, yielding a total of 528 cases. These were divided, randomly and on a patient level, into training (N=360), validation (N=60), and test (N=108) sets. The DL model performance was tested by stratifying the breasts into four density ranges: 1-15%, 15-25%, 25-60%, and >60%. The median absolute errors, in percentage points, were 3.3 (interquartile range (IQR): 3.5), 3.4 (IQR: 2.5), 3.5 (IQR: 3.9), and 14.8 (IQR: 8.4), respectively. Although preliminary, these results show the potential of the proposed approach for accurate BD quantification, which, unlike most previously proposed approaches, is based on an objective ground truth.

Summary
In this work we present a deep learning (DL) based method to estimate breast density from simulated digital mammograms, using the two standard views of a mammographic exam (cranio-caudal and medio-lateral oblique) as the main inputs to the DL model. For training and validating the DL model, ray-traced mammograms simulated from patient-based 3D digital breast phantoms with known density were used. The DL model estimated breast density in our test set with an overall median absolute error of 3.6 percentage points, indicating the potential of the proposed approach.
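As a minimal, self-contained sketch of the evaluation described above, the per-stratum median absolute error can be computed as follows; the predicted and ground-truth density values here are made up for illustration and are not the study's data:

```python
import numpy as np

# Hypothetical predicted vs. ground-truth breast densities (percent);
# values are illustrative, not taken from the study.
pred = np.array([12.0, 18.5, 30.1, 70.2, 8.3, 22.0])
true = np.array([10.0, 20.0, 27.0, 62.0, 11.0, 24.5])

abs_err = np.abs(pred - true)      # absolute error in percentage points
bins = [1, 15, 25, 60, 100]        # stratum edges: 1-15, 15-25, 25-60, >60 %
strata = np.digitize(true, bins)   # assign each breast to a density stratum

# Median absolute error per density stratum, as in the study's evaluation
per_stratum = {
    int(s): float(np.median(abs_err[strata == s]))
    for s in np.unique(strata)
}
overall_median = float(np.median(abs_err))
```

Stratifying by the ground-truth density (rather than the prediction) keeps the strata fixed across models, so per-stratum errors remain comparable between methods.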