4D cone beam computed tomography (CBCT) images of the thorax and abdomen can have reduced quality due to the limited number of projections per respiratory bin used in gated image reconstruction. In this work, we present a new algorithm to reconstruct high-quality CBCT images by simultaneously reconstructing images and generating an associated respiratory motion model. This is done by updating model parameters to compensate for motion during the iterative image reconstruction process. CBCT image acquisition was simulated using the digital eXtended CArdiac-Torso (XCAT) phantom, with breathing motion driven by four patient breathing traces. 4DCBCT images were reconstructed using the simultaneous algebraic reconstruction technique (SART) and compared to the proposed motion-compensated SART (McSART) algorithm. McSART used a motion model that describes tissue position as a function of diaphragm amplitude and velocity. The McSART algorithm alternately updated the motion model and the image reconstruction, increasing the number of projections used for image reconstruction with every iteration. The model was able to interpolate and extrapolate deformations according to the magnitude of the surrogate signal. Without noise, the final-iteration McSART images had HU errors at 31%, 34%, and 44% of their SART-reconstructed counterparts relative to ground-truth XCAT images, with corresponding root-mean-square (RMS) motion model errors of 0.75 mm, 1.08 mm, and 1.17 mm, respectively. With added image noise, McSART's HU error was 31% of the SART-reconstructed 4DCBCT error, with a 1.43 mm RMS motion model error. Qualitatively, blurring and streaking artifacts were reduced in all McSART-reconstructed images compared to 3D CBCT or SART-reconstructed 4DCBCT. The output of the algorithm was a high-quality reference image and a corresponding motion model that can be used to deform the reference image to any other point in the breathing cycle.
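The surrogate-driven motion model described above can be sketched as a per-voxel linear fit in the two surrogate signals. This is an illustrative minimal version, assuming a model of the form d(x, t) = m_a(x)·a(t) + m_v(x)·v(t), where a and v are diaphragm amplitude and velocity; the function and variable names are ours, not from the paper's implementation.

```python
import numpy as np

def fit_motion_model(displacements, amplitudes, velocities):
    """Least-squares fit of per-voxel motion-model coefficients.

    displacements : (n_bins, n_voxels) array of observed displacements,
                    e.g. from deformable registration between respiratory bins
    amplitudes, velocities : (n_bins,) surrogate values for each bin
    Returns (m_a, m_v), each of shape (n_voxels,).
    """
    S = np.column_stack([amplitudes, velocities])   # (n_bins, 2) design matrix
    coeffs, *_ = np.linalg.lstsq(S, displacements, rcond=None)
    return coeffs[0], coeffs[1]

def predict_displacement(m_a, m_v, a, v):
    """Interpolate or extrapolate displacement for any surrogate value (a, v)."""
    return m_a * a + m_v * v
```

Because the model is linear in the surrogate signals, it can extrapolate beyond the amplitudes observed during the binned reconstruction, which is how the reference image can be deformed to arbitrary breathing points.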
Purpose To develop and evaluate a method of reconstructing a patient- and treatment-day-specific volumetric image and motion model from free-breathing cone-beam projections and respiratory surrogate measurements. This Motion-Compensated Simultaneous Algebraic Reconstruction Technique (MC-SART) generates and uses a motion model derived directly from the cone-beam projections, without requiring prior motion measurements from 4DCT, and can compensate for both inter- and intrabin deformations. The motion model can be used to generate images at arbitrary breathing points, which can be used for estimating volumetric images during treatment delivery. Methods MC-SART was formulated as simultaneous image reconstruction and motion model estimation. For image reconstruction, projections were first binned according to external surrogate measurements. Projections in each bin were used to reconstruct a set of volumetric images using MC-SART. The motion model was estimated by deformable image registration between the reconstructed bins and least-squares fitting of the model parameters. The model was used to compensate for motion in both the projection and backprojection operations in subsequent image reconstruction iterations. These updated images were then used to update the motion model, and the two steps were alternated. The final output is a volumetric reference image and a motion model that can be used to generate images at any other time point from surrogate measurements. Results A retrospective dataset consisting of eight lung cancer patients was used to evaluate the method. The absolute intensity differences in the lung regions compared to ground truth were 50.8 ± 43.9 HU in peak exhale phases (reference) and 80.8 ± 74.0 HU in peak inhale phases (generated). The 50th percentile of registration error over all lung voxels with >5 mm motion amplitude was 1.3 mm.
MC-SART was also applied to measured patient cone-beam projections acquired with a linac-mounted CBCT system. Results from these patient data demonstrated the feasibility of MC-SART and showed qualitative image-quality improvements compared to other state-of-the-art algorithms. Conclusion We have developed a simultaneous image reconstruction and motion model estimation method that uses cone-beam computed tomography (CBCT) projections and respiratory surrogate measurements to reconstruct a high-quality reference image and motion model of a patient in treatment position. The method provided superior performance in both HU accuracy and positional accuracy compared to existing methods. The resultant reference image and motion model can be combined with respiratory surrogate measurements to generate volumetric images representing patient anatomy at arbitrary time points.
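The SART update at the core of the reconstruction step can be sketched on a toy linear system. This is illustrative only: the dense matrix A here stands in for the cone-beam forward projector, which in MC-SART is composed with the motion-model deformation for the current breathing bin; the function name and relaxation default are our assumptions.

```python
import numpy as np

def sart(A, b, n_iter=200, relax=0.5):
    """Simultaneous algebraic reconstruction on a toy system A x = b.

    A : (n_rays, n_voxels) nonnegative system matrix (stand-in for the
        cone-beam projector); b : (n_rays,) measured projection values.
    """
    x = np.zeros(A.shape[1])
    row_sums = A.sum(axis=1)          # normalization over each ray
    col_sums = A.sum(axis=0)          # normalization over each voxel
    row_sums[row_sums == 0] = 1.0
    col_sums[col_sums == 0] = 1.0
    for _ in range(n_iter):
        # Backproject the ray-normalized residual, then normalize per voxel.
        residual = (b - A @ x) / row_sums
        x = x + relax * (A.T @ residual) / col_sums
    return x
```

In MC-SART, the same update runs per respiratory bin, but both `A @ x` and `A.T @ residual` are warped by the current motion-model estimate, so projections from neighboring bins can contribute to the reference image without introducing motion blur.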
To objectively compare the suitability of MRI pulse sequences and commercially available fiducial markers (FMs) for MRI-only prostate radiotherapy simulation. Most FMs appear as small signal voids in MR images, making them difficult to differentiate from tissue heterogeneities such as calcifications. In this study, we use quantitative metrics to objectively evaluate the visibility of FMs in 27 patients and an anthropomorphic phantom with a variety of standard clinical MRI pulse sequences and commercially available FMs. FM visibility was quantified using the local contrast-to-noise ratio (lCNR), the difference between the 80th and 20th percentile iso-intensity FM volumes (Vfall), and the largest iso-intensity volume that can be distinguished from background: the apparent marker volume (AMV). A larger lCNR and AMV and a smaller Vfall indicate a more easily identifiable FM. The number of non-marker objects visualized by each pulse sequence was calculated using FM-derived template matching. The FM-based target registration error (TRE) between each MRI and the planning CT image was calculated. Fiducial marker visibility was rated by two medical physicists with over three years of experience examining MRI-only prostate simulation images. Each rater's classification accuracy was quantified using the F1 score, which is the harmonic mean of the rater's precision and recall. These quantitative metrics and human observer ratings were used to evaluate FM identifiability in images from nine subtypes of T1-weighted, T2-weighted, and gradient echo (GRE) pulse sequences in a 27-patient study. A phantom study was conducted to quantify the visibility of 8 commercially available FMs. In the patient study, the largest mean lCNR and AMV, and the smallest normalized Vfall, were produced by the 3.0 T multiple-echo GRE pulse sequence (T1-VIBE, 2° flip angle, 1.23 ms and 2.45 ms echo times).
This pulse sequence produced no false marker detections and TREs less than 2 mm in the left–right, anterior–posterior, and cranial–caudal directions. Human observers gave the 1.23 ms echo-time GRE images the best average marker visibility score (100%) and an F1 score of 1. In the phantom study, the Gold Anchor GA-200X-20-B (deployed in a folded configuration) produced the largest sequence-averaged lCNR and AMV measurements, at 16.1 and 16.7 mm³, respectively. Using quantitative visibility and distinguishability metrics and human observer ratings, the patient study demonstrated that multiple-echo GRE images produced the best gold FM visibility and distinguishability. The phantom study demonstrated that markers manufactured from platinum or iron-doped gold produced quantitatively superior visibility compared to their pure gold counterparts.
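Two of the metrics above can be written down compactly. The definitions below are generic textbook forms assumed for illustration (the study's exact ROI and noise definitions may differ): the lCNR compares the marker's mean intensity to the local background, and the F1 score is the harmonic mean of precision and recall.

```python
import numpy as np

def local_cnr(marker_roi, background_roi):
    """Local contrast-to-noise ratio of a fiducial marker:
    |mean(marker) - mean(background)| / std(background)."""
    marker_roi = np.asarray(marker_roi, dtype=float)
    background_roi = np.asarray(background_roi, dtype=float)
    return abs(marker_roi.mean() - background_roi.mean()) / background_roi.std()

def f1_score(true_pos, false_pos, false_neg):
    """Harmonic mean of precision and recall for a rater's FM classifications."""
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return 2 * precision * recall / (precision + recall)
```

A rater who identifies every true marker (recall 1) but flags some calcifications as markers (precision < 1) scores below 1; an F1 score of 1, as reported for the 1.23 ms echo-time GRE images, requires both perfect precision and perfect recall.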