Ultrasound computed tomography is an inexpensive and radiation-free medical imaging technique used to quantify tissue acoustic properties for advanced clinical diagnosis. Image reconstruction in ultrasound tomography is often modeled as an optimization problem solved by iterative methods such as full-waveform inversion. These iterative methods are computationally expensive, and the underlying optimization problem is ill-posed and non-linear. To address this, we propose to use deep learning to overcome the computational burden and ill-posedness and achieve near real-time image reconstruction in ultrasound tomography. We aim to learn the direct mapping from the recorded time-series sensor data to a spatial image of acoustic properties. To accomplish this, we develop a deep learning model consisting of two cascaded convolutional neural networks with an encoder-decoder architecture: the first network extracts an intermediate representation of the mapping, and the second network uses this representation to reconstruct the image. The approach is evaluated on synthetic phantoms, where simulated ultrasound data are acquired from a ring of transducers surrounding the region of interest. The measurement data are generated by forward modeling the wave equation using the k-Wave toolbox. Our simulation results demonstrate that the proposed deep-learning method is robust to noise and significantly outperforms the state-of-the-art iterative method both quantitatively and qualitatively. Furthermore, our model requires substantially less computational time than the conventional full-waveform inversion method.
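The encoder-decoder idea described above can be illustrated with a deliberately minimal NumPy sketch. This is not the paper's network (the actual model uses two cascaded, learned CNNs); it only shows the data flow assumed by such an architecture: a 2-D array standing in for time-series sensor data is compressed by a strided convolution (encoder) into a smaller latent map, then upsampled and convolved again (decoder) toward an image-sized output. The kernel values here are random placeholders, not trained weights.

```python
import numpy as np

def conv2d(x, k):
    # Plain "valid" 2-D correlation via explicit loops (fine for tiny arrays).
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def encoder(x, k):
    f = np.maximum(conv2d(x, k), 0.0)   # convolution + ReLU non-linearity
    return f[::2, ::2]                  # stride-2 downsampling to a latent map

def decoder(z, k):
    up = np.kron(z, np.ones((2, 2)))    # nearest-neighbour upsampling
    return conv2d(up, k)                # convolution back toward image space

rng = np.random.default_rng(0)
data = rng.standard_normal((16, 16))    # stand-in for recorded sensor traces
k_enc = rng.standard_normal((3, 3))     # placeholder (untrained) kernels
k_dec = rng.standard_normal((3, 3))

latent = encoder(data, k_enc)           # intermediate representation, 7x7
image = decoder(latent, k_dec)          # reconstructed map, 12x12
```

In the actual cascaded design, the latent map produced by the first network would be consumed by a second encoder-decoder trained end-to-end, with many channels and learned kernels rather than the single random kernel per stage used here.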
Ultrasound tomography (UT) has shown potential for quantifying tissue acoustic properties for advanced clinical diagnosis. Conventionally, UT image reconstruction is posed as a full-waveform inversion (FWI) problem and addressed with iterative optimization methods. The underlying inverse problem is ill-posed and non-linear, so existing iterative FWI methods yield low-resolution UT images with artifacts while also being computationally expensive. Recently, deep learning networks have proven their capabilities in solving many complex problems. We propose to bring deep learning to UT, leveraging its potential to overcome ill-posedness and to learn the direct mapping from the time-series sensor data to the spatial acoustic image of the region of interest. We build a deep learning model using an encoder-decoder architecture with a convolutional neural network (CNN). Because CNNs are known to introduce artifacts, we employ a locally connected conditional random field (CRF) on top of the CNN to enhance the UT image. The proposed CRF-CNN demonstrates the feasibility of performing UT directly with deep learning, with promising results: 9% higher accuracy, while the computational time is reduced to 3 min on synthetic ultrasound data.
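The role of the CRF stage above can be sketched with a toy mean-field-style smoothing step in NumPy. This is a hypothetical stand-in, not the paper's locally connected CRF (which would be learned jointly with the CNN): each iteration blends the CNN's raw output, treated as the unary term, with the 4-neighbour mean, treated as a pairwise smoothness term, suppressing isolated artifact pixels while staying anchored to the original prediction.

```python
import numpy as np

def crf_refine(img, alpha=0.5, iters=5):
    """Toy mean-field-style refinement of a CNN output.

    Each iteration blends the original image (unary term) with the
    4-neighbour mean of the current estimate (pairwise smoothness term).
    `alpha` trades fidelity to the CNN output against smoothness.
    """
    out = img.copy()
    for _ in range(iters):
        padded = np.pad(out, 1, mode="edge")
        neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        out = (1.0 - alpha) * img + alpha * neigh
    return out

rng = np.random.default_rng(0)
noisy = rng.standard_normal((32, 32))   # stand-in for a CNN output with artifacts
refined = crf_refine(noisy)             # smoother map, same shape
```

A real CRF layer would additionally condition the pairwise weights on local image content (so edges are preserved rather than blurred), which is what makes the locally connected variant effective as a learned post-processing stage.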