A deep learning technique for enhancing 3D images of the complex-valued permittivity of the breast obtained via microwave imaging is investigated. The technique extends one previously developed to enhance 2D images. We employ a 3D convolutional neural network, based on the U-Net architecture, that takes as input 3D images obtained using the Contrast-Source Inversion (CSI) method and attempts to produce the true 3D image of the permittivity. The training set consists of 3D CSI images paired with the true numerical phantom images from which the microwave scattered field used to create the CSI reconstructions was synthetically generated. The numerical phantoms vary in the size, number, and location of tumors within the fibroglandular region. The reconstructed permittivity images produced by the proposed 3D U-Net show that the network not only removes the artifacts typical of CSI reconstructions but also enhances the detectability of the tumors. We test the trained U-Net with 3D images obtained from experimentally collected microwave data as well as with images obtained synthetically. Significantly, although the network was trained using only images obtained from synthetic data, it performed well on images obtained from both synthetic and experimental data. Quantitative evaluations are reported using Receiver Operating Characteristic (ROC) curves for tumor detectability and root-mean-square (RMS) error for reconstruction enhancement.
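The ROC-curve and RMS-error evaluation described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' evaluation code: it assumes tumor detection is performed by sweeping a threshold over a reconstructed permittivity map against a binary ground-truth tumor mask, and all function and variable names are illustrative.

```python
import numpy as np

def roc_curve_points(perm_map, tumor_mask, thresholds):
    """Sweep detection thresholds over a reconstructed permittivity map
    and return (false-positive-rate, true-positive-rate) pairs."""
    positives = tumor_mask.sum()
    negatives = tumor_mask.size - positives
    fprs, tprs = [], []
    for t in thresholds:
        detected = perm_map >= t
        tp = np.logical_and(detected, tumor_mask).sum()
        fp = np.logical_and(detected, ~tumor_mask).sum()
        tprs.append(tp / positives)
        fprs.append(fp / negatives)
    return np.array(fprs), np.array(tprs)

def rms_error(reconstruction, ground_truth):
    """RMS error between a (possibly complex-valued) reconstruction
    and the true phantom image, using the complex magnitude."""
    return np.sqrt(np.mean(np.abs(reconstruction - ground_truth) ** 2))
```

In 3D the same functions apply unchanged, since NumPy reductions operate over all voxels regardless of array rank.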
We present a deep learning method used in conjunction with dual-modal microwave-ultrasound imaging to produce tomographic reconstructions of the complex-valued permittivity of numerical breast phantoms. We also assess tumor segmentation performance using the reconstructed permittivity as a feature. The contrast source inversion (CSI) technique is used to create the complex-permittivity images of the breast, with ultrasound-derived tissue regions utilized as prior information. However, imaging artifacts make the detection of tumors difficult. To overcome this issue, we train a convolutional neural network (CNN) that takes as input the dual-modal CSI reconstruction and attempts to produce the true image of the complex tissue permittivity. The neural network consists of successive convolutional and downsampling layers, followed by successive deconvolutional and upsampling layers, based on the U-Net architecture. To train the neural network, the input-output pairs consist of CSI's dual-modal reconstructions, along with the true numerical phantom images from which the microwave scattered field was synthetically generated. The reconstructed permittivity images produced by the CNN show that the network is not only able to remove the artifacts that are typical of CSI reconstructions, but can also improve the detectability of tumors. The performance of the CNN is assessed using a four-fold cross-validation on our dataset, which shows improvement over CSI both in terms of reconstruction error and tumor segmentation performance.
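The four-fold cross-validation used to assess the CNN can be sketched as follows. This is a minimal NumPy example, not the authors' code: the sample count, fold count, and seed are illustrative, and each phantom is assumed to appear in exactly one test fold.

```python
import numpy as np

def k_fold_splits(n_samples, n_folds=4, seed=0):
    """Shuffle sample indices and partition them into n_folds
    disjoint (train_indices, test_indices) splits."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(n_samples)
    folds = np.array_split(indices, n_folds)
    splits = []
    for k in range(n_folds):
        test_idx = folds[k]
        train_idx = np.concatenate(
            [folds[j] for j in range(n_folds) if j != k])
        splits.append((train_idx, test_idx))
    return splits
```

Each of the four networks is then trained on its fold's training indices and evaluated on the held-out test indices, so every phantom contributes exactly once to the reported test metrics.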
A deep learning approach is proposed for performing tissue-type classification of tomographic microwave and ultrasound property images of the breast. The approach is based on a convolutional neural network (CNN) utilizing the U-Net architecture that also quantifies the uncertainty in the classification of each pixel. Quantitative tomographic reconstructions of dielectric properties (complex-valued permittivity), ultrasonic properties (compressibility and attenuation), as well as their combination, together with the corresponding actual tissue-type classifications, constitute the training set. The CNN learns to map the quantitative property reconstructions to a single tissue-type image. The level of confidence in predicting a tissue type at each pixel is determined. This uncertainty quantification is diagnostically critical for biomedical applications, especially when attempting to distinguish between cancerous and healthy tissues. The Gauss-Newton Inversion algorithm is used for the quantitative reconstruction of both dielectric and ultrasonic properties. Electromagnetic and ultrasound scattered-field data are obtained from MRI-derived numerical breast phantoms. Several numerical breast phantom types, from fatty to dense, are considered. The proposed classification and uncertainty quantification approach is shown to outperform a previously studied tissue-type classification method based on a Bayesian approach.
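The abstract does not specify how the per-pixel uncertainty is computed; a common choice for classification networks is the predictive entropy of the per-pixel class probabilities, sketched here as a hedged NumPy example (the array shapes and function name are assumptions, not the authors' implementation).

```python
import numpy as np

def pixelwise_entropy(class_probs, eps=1e-12):
    """Per-pixel predictive entropy (in nats) of a tissue-type classifier.

    class_probs: array of shape (n_classes, H, W) holding softmax
    probabilities that sum to 1 along axis 0. Higher entropy means
    lower confidence in the predicted tissue type at that pixel."""
    p = np.clip(class_probs, eps, 1.0)  # avoid log(0)
    return -np.sum(p * np.log(p), axis=0)
```

A near-one-hot probability vector yields entropy close to zero (high confidence), while a uniform vector over n classes yields the maximum entropy log(n), flagging pixels, such as those near a possible tumor boundary, where the classification should be treated with caution.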
A two-stage workflow for detecting and monitoring tumors in the human breast with an inverse scattering-based technique is presented. Stage 1 involves a phaseless bulk-parameter inference neural network that recovers the geometry and permittivity of the breast fibroglandular region. The bulk parameters are used for calibration and as prior information for Stage 2, a full-phase contrast source inversion of the measurement data, to detect regions of high relative complex-valued permittivity in the breast based on an assumed known overall tissue geometry. We demonstrate the ability of the workflow to recover the geometry and bulk permittivity of differently sized fibroglandular regions, and to detect and localize tumors of various sizes and locations within the breast model. Preliminary results show promise for a synthetically trained Stage 1 network to be applied to experimental data and provide high-quality prior information in practical imaging situations.
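The Stage-2 detection step, flagging regions of high relative permittivity within the assumed tissue geometry, can be illustrated with a minimal NumPy sketch. The threshold, mask, and function names below are illustrative assumptions, not part of the authors' workflow code.

```python
import numpy as np

def detect_high_permittivity(perm_map, region_mask, threshold):
    """Flag voxels inside a prior tissue region (e.g. the recovered
    fibroglandular region) whose real relative permittivity exceeds
    a detection threshold; returns a boolean detection map."""
    return np.logical_and(region_mask, perm_map.real >= threshold)
```

In the two-stage workflow, `region_mask` would come from the Stage-1 bulk-parameter network, so detections outside the inferred fibroglandular geometry are suppressed by construction.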