Purpose: X-ray scattering reduces the contrast of CT images, distorts CT values, and introduces streak and cupping artifacts. Scatter correction is therefore crucial to maintain the diagnostic value of CT and CBCT examinations. However, existing approaches cannot combine high accuracy with high computational performance. We therefore propose the deep scatter estimation (DSE): a deep convolutional neural network that derives highly accurate scatter estimates in real time.

Methods: Gold-standard scatter estimation approaches rely on dedicated Monte Carlo (MC) photon transport codes. Being computationally expensive, however, MC methods cannot be used routinely. To enable real-time scatter correction with similar accuracy, DSE uses a deep convolutional neural network trained to predict MC scatter estimates from the acquired projection data. The potential of DSE is demonstrated using simulations of CBCT head, thorax, and abdomen scans as well as measurements at an experimental table-top CBCT. Two conventional, computationally efficient scatter estimation approaches were implemented as references: a kernel-based scatter estimation (KSE) and the hybrid scatter estimation (HSE).

Results: The simulation study demonstrates that DSE generalizes well to varying tube voltages, noise levels, and anatomical regions as long as they are appropriately represented within the training data. In all cases the deviation of the scatter estimates from the ground-truth MC scatter distribution is less than 1.8%, while it is between 6.2% and 293.3% for HSE and between 11.2% and 20.5% for KSE. To evaluate the performance on real data, measurements of an anthropomorphic head phantom were performed, with errors quantified against a slit-scan reconstruction. Here, the deviation is 278 HU (no correction), 123 HU (KSE), 65 HU (HSE), and 6 HU (DSE), respectively.
Conclusions: DSE clearly outperforms conventional scatter estimation approaches in terms of accuracy. It is nearly as accurate as Monte Carlo simulation but faster by orders of magnitude (≈10 ms per projection).
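The kernel-based reference method (KSE) mentioned above approximates scatter as a broad convolution of a scatter source term with a smoothing kernel. The following minimal numpy sketch illustrates only that general idea; the kernel shape, amplitude, and source weighting are illustrative assumptions, not the parameters or the exact model used in the paper:

```python
import numpy as np

def kse_scatter_estimate(projection, kernel_sigma=32.0, amplitude=0.1):
    """Toy kernel-based scatter estimation: the scatter in a projection
    is modeled as a broad convolution of a scatter source term with a
    Gaussian kernel. All parameters here are illustrative."""
    ny, nx = projection.shape
    # Scatter source term: here simply proportional to the primary
    # intensity (a real KSE uses a physically motivated weighting).
    source = amplitude * projection
    # Build a normalized 2D Gaussian kernel centered at index (0, 0),
    # which is the correct layout for circular FFT convolution.
    y = np.fft.fftfreq(ny) * ny
    x = np.fft.fftfreq(nx) * nx
    yy, xx = np.meshgrid(y, x, indexing="ij")
    kernel = np.exp(-(xx**2 + yy**2) / (2 * kernel_sigma**2))
    kernel /= kernel.sum()
    # Circular convolution via FFT yields the smooth scatter estimate.
    scatter = np.real(np.fft.ifft2(np.fft.fft2(source) * np.fft.fft2(kernel)))
    return scatter

# Example: a flat primary intensity with an attenuating object in the center.
primary = np.ones((128, 128))
primary[32:96, 32:96] = 0.2          # object casts a shadow
scatter = kse_scatter_estimate(primary)
corrected = primary - scatter        # scatter-corrected projection
```

DSE replaces this fixed-kernel model by a network trained against MC references, which is what allows it to stay accurate across anatomies while remaining fast.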
During a typical cardiac short scan, the heart can move several millimeters. As a result, the corresponding CT reconstructions may be corrupted by motion artifacts. The assessment of small structures in particular, such as the coronary arteries, is potentially impaired by these artifacts. To estimate and compensate for coronary artery motion, this manuscript proposes the deep partial angle-based motion compensation (Deep PAMoCo).

Methods: The Deep PAMoCo relies on the concept of partial angle reconstructions (PARs): it divides the short scan data into several consecutive angular segments and reconstructs them separately. Subsequently, the PARs are deformed according to a motion vector field (MVF) such that they represent the same motion state, and summed to obtain the final motion-compensated reconstruction. In contrast to prior work based on the same principle, however, the Deep PAMoCo estimates and applies the MVF via a deep neural network to increase both the computational performance and the quality of the motion-compensated reconstructions.

Results: Using simulated data, it could be demonstrated that the Deep PAMoCo removes almost all motion artifacts independent of the contrast, the radius, and the motion amplitude of the coronary artery. In all cases, the average error of the CT values along the coronary artery is about 25 HU, while errors of up to 300 HU can be observed if no correction is applied. Similar results were obtained for clinical cardiac CT scans, where the Deep PAMoCo clearly outperforms state-of-the-art coronary artery motion compensation approaches in terms of processing time as well as accuracy.

Conclusions: The Deep PAMoCo provides an efficient approach to increase the diagnostic value of cardiac CT scans even if they are highly corrupted by motion.
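The PAR principle described in the Methods can be illustrated with a toy example: each partial angle reconstruction is warped to a common motion state, then the warped PARs are summed. In this sketch a simple integer translation stands in for the dense MVF that the network would predict, and the "vessel" is a single bright pixel drifting one pixel per angular segment; everything here is purely illustrative:

```python
import numpy as np

def pamoco_sum(pars, shifts):
    """Warp each partial angle reconstruction (PAR) to a common motion
    state and sum them. A per-PAR integer translation (dy, dx) stands in
    for the dense motion vector field a network would estimate."""
    out = np.zeros_like(pars[0])
    for par, (dy, dx) in zip(pars, shifts):
        out += np.roll(np.roll(par, dy, axis=0), dx, axis=1)
    return out

# Simulate a point-like vessel that drifts one pixel per angular segment.
n_par, size = 5, 64
pars = []
for k in range(n_par):
    img = np.zeros((size, size))
    img[32, 30 + k] = 1.0            # vessel position in segment k
    pars.append(img / n_par)         # each PAR carries 1/n of the signal

# Without compensation the summed PARs smear the vessel (motion blur);
# with the correct per-segment shifts all PARs align at column 32.
uncompensated = pamoco_sum(pars, [(0, 0)] * n_par)
compensated = pamoco_sum(pars, [(0, -(k - 2)) for k in range(n_par)])
```

In the compensated sum the vessel recovers its full intensity at a single position, which is the mechanism by which the Deep PAMoCo restores CT values along the coronary artery.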
Image guidance for minimally invasive interventions is usually performed by acquiring fluoroscopic images with a monoplanar or biplanar C-arm system. However, these projective data provide only limited information about the spatial structure and position of interventional tools and devices such as stents, guide wires, or coils. In this work we propose a deep learning-based pipeline for real-time tomographic (four-dimensional) interventional guidance at conventional dose levels.

Methods: Our pipeline comprises two steps. In the first, interventional tools are extracted from four cone-beam CT projections using a deep convolutional neural network. These projections are then Feldkamp-reconstructed and fed into a second network, which is trained to segment the interventional tools and devices in this highly undersampled reconstruction. Both networks are trained on simulated CT data and evaluated on both simulated data and C-arm cone-beam CT measurements of stents, coils, and guide wires.

Results: The pipeline is capable of reconstructing interventional tools from only four x-ray projections without the need for a patient prior. At an isotropic voxel size of 100 µm, our method achieves a precision/recall within a 100 µm environment of the ground truth of 93%/98%, 90%/71%, and 93%/76% for guide wires, stents, and coils, respectively.

Conclusions: A deep learning-based approach for four-dimensional interventional guidance overcomes the drawbacks of today's interventional guidance by providing full spatiotemporal (4D) information about the interventional tools at dose levels comparable to conventional fluoroscopy.
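The two-step pipeline can be sketched end to end in a toy 2D setting. Simple thresholds stand in for the two trained networks, and an unfiltered parallel-beam backprojection replaces the Feldkamp reconstruction of the cone-beam data; all function names and parameters are illustrative assumptions:

```python
import numpy as np

def segment_tools_in_projection(proj, threshold=0.5):
    """Step 1 stand-in: a trained CNN would extract the interventional
    tools from each projection; a plain threshold is used here instead."""
    return (proj > threshold).astype(float)

def backproject(projections, angles, size):
    """Unfiltered parallel-beam backprojection of a few tool-only
    projections (a toy stand-in for the Feldkamp reconstruction)."""
    vol = np.zeros((size, size))
    c = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    for proj, theta in zip(projections, angles):
        # Detector coordinate of each voxel for this viewing angle.
        t = (xs - c) * np.cos(theta) + (ys - c) * np.sin(theta) + c
        idx = np.clip(np.round(t).astype(int), 0, size - 1)
        vol += proj[idx]
    return vol / len(projections)

def segment_tools_in_volume(vol, threshold=0.9):
    """Step 2 stand-in: the second network segments the tools in the
    highly undersampled reconstruction; again a threshold suffices here."""
    return (vol > threshold).astype(float)

# A point-like tool at the isocenter projects to the central detector
# position in every view, so four views already localize it in 2D.
size = 65
angles = np.deg2rad([0.0, 45.0, 90.0, 135.0])   # four projections
projections = [np.zeros(size) for _ in angles]
for p in projections:
    p[32] = 1.0                                  # tool at the isocenter

masks = [segment_tools_in_projection(p) for p in projections]
recon = backproject(masks, angles, size)
tool_mask = segment_tools_in_volume(recon)
```

Segmenting the tools in the projections first is what makes four views sufficient: only the sparse, high-contrast devices must be reconstructed, so the streak artifacts of the undersampled reconstruction can be suppressed by the second segmentation step.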