In image-guided spine surgery, robust three-dimensional to two-dimensional (3D–2D) registration of preoperative computed tomography (CT) and intraoperative radiographs can be challenged by the image content mismatch associated with the presence of surgical instrumentation and implants as well as soft-tissue resection or deformation. This work investigates image similarity metrics in 3D–2D registration offering improved robustness against mismatch, thereby improving performance and reducing or eliminating the need for manual masking. The performance of four gradient-based image similarity metrics (gradient information (GI), gradient correlation (GC), gradient information with linear scaling (GS), and gradient orientation (GO)) with a multistart optimization strategy was evaluated in an institutional review board-approved retrospective clinical study using 51 preoperative CT images and 115 intraoperative mobile radiographs. Registrations were tested with and without polygonal masks as a function of the number of multistarts employed during optimization. Registration accuracy was evaluated in terms of the projection distance error (PDE) and assessment of failure modes (PDE > 30 mm) that could impede reliable vertebral level localization. With manual polygonal masking and 200 multistarts, the GC and GO metrics exhibited robust performance with 0% gross failures and median PDE < 6.4 mm (4.4 mm interquartile range (IQR)) and a median runtime of 84 s (plus upwards of 1–2 min for manual masking). Excluding manual polygonal masks and decreasing the number of multistarts to 50 caused the GC-based registration to fail at a rate of >14%; however, GO maintained robustness with a 0% gross failure rate. Overall, the GI, GC, and GS metrics were susceptible to registration errors associated with content mismatch, but GO provided robust registration (median PDE = 5.5 mm, 2.6 mm IQR) without manual masking and with an improved runtime (29.3 s).
The GO metric improved the registration accuracy and robustness in the presence of strong image content mismatch. This capability could offer valuable assistance and decision support in spine level localization in a manner consistent with clinical workflow.
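The robustness of an orientation-only metric to content mismatch can be illustrated with a minimal 2D sketch. The function below scores agreement between gradient directions while discarding gradient magnitude, so strong spurious edges (e.g., from instrumentation) do not dominate; the function name, magnitude threshold, and the (cos 2θ + 1)/2 weighting are illustrative choices, not the exact formulation used in the study.

```python
import numpy as np

def gradient_orientation_similarity(fixed, moving, mag_thresh=1e-3):
    """Illustrative gradient-orientation (GO)-style similarity for 2D images.

    Compares gradient *directions* only, ignoring magnitude, so that a few
    very strong mismatched edges cannot dominate the score.
    """
    gx1, gy1 = np.gradient(fixed.astype(float))
    gx2, gy2 = np.gradient(moving.astype(float))
    m1 = np.hypot(gx1, gy1)
    m2 = np.hypot(gx2, gy2)
    # evaluate only where both images have a meaningful gradient
    mask = (m1 > mag_thresh) & (m2 > mag_thresh)
    if not np.any(mask):
        return 0.0
    # cosine of the angle between the two gradient vectors
    cos_t = (gx1 * gx2 + gy1 * gy2)[mask] / (m1 * m2)[mask]
    # (cos(2*theta) + 1) / 2 == cos^2(theta): aligned or anti-aligned edges
    # score 1, orthogonal edges score 0
    return float(np.mean(cos_t ** 2))
```

For identical images the score approaches 1; for images whose gradients are everywhere orthogonal it approaches 0.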
Intraoperative localization of target anatomy and critical structures defined in preoperative MR/CT images can be achieved through the use of multimodality deformable registration. We propose a symmetric diffeomorphic deformable registration algorithm incorporating a modality-independent neighborhood descriptor (MIND) and a robust Huber metric for MR-to-CT registration. The method, called MIND Demons, finds a deformation field between two images by optimizing an energy functional that incorporates both the forward and inverse deformations, smoothness on the integrated velocity fields, a modality-insensitive similarity function suitable to multimodality images, and smoothness on the diffeomorphisms themselves. Direct optimization without relying on the exponential map and stationary velocity field approximation used in conventional diffeomorphic Demons is carried out using a Gauss-Newton method for fast convergence. Registration performance and sensitivity to registration parameters were analyzed in simulation, phantom experiments, and clinical studies emulating application in image-guided spine surgery, and results were compared to mutual information (MI) free-form deformation (FFD), local MI (LMI) FFD, normalized MI (NMI) Demons, and MIND with a diffusion-based registration method (MIND-elastic). The method yielded sub-voxel invertibility (0.008 mm) and strictly positive Jacobian determinants. It also showed improved registration accuracy in comparison to the reference methods, with mean target registration error (TRE) of 1.7 mm compared to 11.3, 3.1, 5.6, and 2.4 mm for MI FFD, LMI FFD, NMI Demons, and MIND-elastic methods, respectively. Validation in clinical studies demonstrated realistic deformations with sub-voxel TRE in cervical, thoracic, and lumbar spine cases.
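The self-similarity idea behind MIND can be sketched in a few lines. Below is a deliberately simplified 2D version with single-pixel patches, a 4-neighborhood, and periodic boundaries; the function names, the epsilon regularization, and the per-pixel normalization are assumptions for illustration, not the Gaussian-weighted 3D patch descriptor used in the paper.

```python
import numpy as np

def mind_descriptor(img, eps=1e-6):
    """Simplified MIND-style descriptor on a 2D image.

    Each pixel receives a small vector of exp-normalized self-similarities
    to its 4 neighbors. Because the descriptor encodes the *pattern* of
    local similarity rather than raw intensity, it is comparable across
    imaging modalities.
    """
    img = img.astype(float)
    shifts = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    diffs = []
    for dy, dx in shifts:
        # periodic boundary handling for simplicity
        shifted = np.roll(img, (dy, dx), axis=(0, 1))
        diffs.append((img - shifted) ** 2)
    diffs = np.stack(diffs, axis=0)            # shape (4, H, W)
    variance = diffs.mean(axis=0) + eps        # local noise estimate
    desc = np.exp(-diffs / variance)
    return desc / (desc.max(axis=0) + eps)     # normalize per pixel

def mind_distance(img_a, img_b):
    """Mean absolute difference between MIND descriptors: a modality-
    insensitive dissimilarity usable inside a Demons-style optimizer."""
    return float(np.mean(np.abs(mind_descriptor(img_a) - mind_descriptor(img_b))))
```

Because the variance normalization cancels affine intensity changes, the distance between an image and a linearly rescaled copy of itself is near zero, which is the property that makes the metric usable across MR and CT.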
Spinal screw placement is a challenging task due to small bone corridors and high risk of neurological or vascular complications, benefiting from precision guidance/navigation and quality assurance (QA). Implicit to both guidance and QA is the definition of a surgical plan, i.e., the desired trajectories and device selection for the target vertebrae, which conventionally requires time-consuming manual annotations by a skilled surgeon. We propose automation of such planning by deriving the pedicle trajectory and device selection from a patient's preoperative CT or MRI. An atlas of vertebrae surfaces was created to provide the underlying basis for automatic planning; in this work, it comprised 40 exemplary vertebrae at three levels of the spine (T7, T8, and L3). The atlas was enriched with ideal trajectory annotations for 60 pedicles in total. To define trajectories for a given patient, sparse deformation fields from the atlas surfaces to the input (CT or MR image) are applied on the annotated trajectories. Mean value coordinates are used to interpolate dense deformation fields. The pose of a straight trajectory is optimized by image-based registration to an accumulated volume of the deformed annotations. For evaluation, input deformation fields were created using coherent point drift (CPD) to perform a leave-one-out analysis over the atlas surfaces. CPD registration demonstrated surface error of 0.89 ± 0.10 mm (median ± interquartile range) for T7/T8 and 1.29 ± 0.15 mm for L3. At the pedicle center, registered trajectories deviated from the expert reference by 0.56 ± 0.63 mm (T7/T8) and 1.12 ± 0.67 mm (L3). The predicted maximum screw diameter differed by 0.45 ± 0.62 mm (T7/T8), and 1.26 ± 1.19 mm (L3). The automated planning method avoided screw collisions in all cases and demonstrated close agreement overall with expert reference plans, offering a potentially valuable tool in support of surgical guidance and QA.
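The step of carrying atlas trajectory annotations through a sparse surface deformation can be illustrated as follows. Note that this sketch substitutes simple inverse-distance weighting for the mean value coordinates used in the paper, and all names and parameters are hypothetical.

```python
import numpy as np

def warp_trajectory(points, surf_src, surf_disp, power=2.0, eps=1e-9):
    """Carry annotated trajectory points through a sparse surface
    deformation field.

    points    : (N, 3) trajectory points in atlas coordinates
    surf_src  : (M, 3) atlas surface vertices
    surf_disp : (M, 3) displacement of each vertex toward the patient image

    Interpolation here is inverse-distance weighting, a simpler stand-in
    for the mean value coordinates used in the actual method.
    """
    points = np.asarray(points, float)
    out = np.empty_like(points)
    for i, p in enumerate(points):
        d = np.linalg.norm(surf_src - p, axis=1)
        w = 1.0 / (d ** power + eps)   # closer vertices carry more weight
        w /= w.sum()                   # weights form a partition of unity
        out[i] = p + w @ surf_disp     # weighted average displacement
    return out
```

Because the weights sum to one, a pure translation of the surface translates the trajectory exactly, a sanity check any such interpolation scheme should pass.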
Purpose Intraoperative x-ray radiography/fluoroscopy is commonly used to assess the placement of surgical devices in the operating room (e.g., spine pedicle screws), but qualitative interpretation can fail to reliably detect suboptimal delivery and/or breach of adjacent critical structures. We present a 3D–2D image registration method wherein intraoperative radiographs are leveraged in combination with prior knowledge of the patient and surgical components for quantitative assessment of device placement and more rigorous quality assurance (QA) of the surgical product. Methods The algorithm is based on known-component registration (KC-Reg) in which patient-specific preoperative CT and parametric component models are used. The registration performs optimization of gradient similarity, removes the need for offline geometric calibration of the C-arm, and simultaneously solves for multiple component bodies, thereby allowing QA in a single step (e.g., spinal construct with 4–20 screws). Performance was tested in a spine phantom, and first clinical results are reported for QA of transpedicle screws delivered in a patient undergoing thoracolumbar spine surgery. Results Simultaneous registration of 10 pedicle screws (5 contralateral pairs) demonstrated mean target registration error (TRE) of 1.1 ± 0.1 mm at the screw tip and 0.7 ± 0.4° in angulation when a prior geometric calibration was used. The calibration-free formulation, with the aid of component collision constraints, achieved TRE of 1.4 ± 0.6 mm. In all cases, a statistically significant improvement (p < 0.05) was observed for the simultaneous solutions in comparison to previously reported sequential solution of individual components. Conclusions The KC-Reg algorithm offers an independent check and quantitative QA of the surgical product using radiographic/fluoroscopic views acquired within standard OR workflow. Initial application in clinical data in spine surgery demonstrated TRE of 2.7 ± 2.6 mm and 1.5 ± 0.8°.
Such intraoperative assessment could improve quality and safety, provide the opportunity to revise suboptimal constructs in the OR, and reduce the frequency of revision surgery.
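The geometric quantities reported above (projection of a component through a C-arm view, TRE at the screw tip, and angulation between screw axes) can be sketched with a simple pinhole model. The 3 × 4 camera matrix and all function names here are illustrative assumptions, not the KC-Reg implementation.

```python
import numpy as np

def project(P, pts3d):
    """Project 3D points with a 3x4 camera matrix (homogeneous pinhole model)."""
    pts3d = np.asarray(pts3d, float)
    pts_h = np.hstack([pts3d, np.ones((len(pts3d), 1))])
    proj = pts_h @ P.T
    return proj[:, :2] / proj[:, 2:3]    # dehomogenize to detector coordinates

def tip_tre(est_tips, ref_tips):
    """Target registration error at the screw tips (mm), per screw."""
    return np.linalg.norm(np.asarray(est_tips) - np.asarray(ref_tips), axis=1)

def angulation_deg(axis_a, axis_b):
    """Unsigned angle (degrees) between two screw axes."""
    a = axis_a / np.linalg.norm(axis_a)
    b = axis_b / np.linalg.norm(axis_b)
    return float(np.degrees(np.arccos(np.clip(abs(a @ b), -1.0, 1.0))))
```

In the actual algorithm the projection geometry is itself among the unknowns being optimized (the calibration-free formulation), whereas this sketch assumes a known matrix.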
Modern cone-beam CT systems, especially C-arms, are capable of diverse source-detector orbits. However, geometric calibration of these systems using conventional configurations of spherical fiducials (BBs) may be challenged for novel source-detector orbits and system geometries. In part, this is because the BB configurations are designed with careful forethought regarding the intended orbit so that BB marker projections do not overlap in projection views. Examples include helical arrangements of BBs (Rougee et al 1993 Proc. SPIE 1897 161-9), such that markers do not overlap in projections acquired from a circular orbit, and circular arrangements of BBs (Cho et al 2005 Med. Phys. 32 968-83). As a more general alternative, this work proposes a calibration method based on an array of line-shaped, radio-opaque wire segments. With this method, geometric parameter estimation is accomplished by relating the 3D line equations representing the wires to the 2D line equations of their projections. The use of line fiducials simplifies many challenges with fiducial recognition and extraction in an orbit-independent manner. For example, their projections can overlap only mildly, for any gantry pose, as long as the wires are mutually non-coplanar in 3D. The method was tested in application to circular and non-circular trajectories in simulation and in real orbits executed using a mobile C-arm prototype for cone-beam CT. Results indicated high calibration accuracy, as measured by forward and backprojection/triangulation error metrics. Triangulation errors on the order of microns and backprojected ray deviations uniformly less than 0.2 mm were observed in both real and simulated orbits. Mean forward projection errors less than 0.1 mm were observed in a comprehensive sweep of different C-arm gantry angulations. Finally, successful integration of the method into a CT imaging chain was demonstrated in head phantom scans.
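The core relation the method exploits, that a straight 3D wire projects to a straight 2D line for any camera pose, can be sketched as follows. The pinhole matrix and function names are illustrative; an actual calibration solver would drive the point-to-line residual to zero over all wires while varying the geometric parameters inside P.

```python
import numpy as np

def projected_line(P, x0, d):
    """Homogeneous 2D line l = p1 x p2 obtained by projecting two points of
    the 3D wire x(t) = x0 + t*d with the 3x4 camera matrix P."""
    pts = np.array([np.append(x0, 1.0), np.append(x0 + d, 1.0)])
    p1, p2 = pts @ P.T
    l = np.cross(p1, p2)
    # normalize so that l . p (dehomogenized) is a point-to-line distance
    return l / np.linalg.norm(l[:2])

def point_line_residual(l, P, x):
    """Signed 2D distance of a projected 3D point to the line: the quantity
    a calibration solver would minimize over all wires and views."""
    p = P @ np.append(x, 1.0)
    return float(l @ p / p[2])
```

Any point on the 3D wire lies exactly on the projected 2D line, while points off the wire produce a nonzero residual, which is what makes the 3D-line-to-2D-line correspondence usable for estimating the projection geometry.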
The proposed tracker configuration demonstrated sub-mm TRE from the dynamic reference frame of a rotational C-arm through the use of the multi-face reference marker. Real-time DRRs and video augmentation from a natural perspective over the operating table assisted C-arm setup, simplified radiographic search and localization, and reduced fluoroscopy time. Incorporation of the proposed tracker configuration with C-arm CBCT guidance has the potential to simplify intraoperative registration, improve geometric accuracy, enhance visualization, and reduce radiation exposure.
Transoral robotic surgery (TORS) offers a minimally invasive approach to resection of base of tongue tumors. However, precise localization of the surgical target and adjacent critical structures can be challenged by the highly deformed intraoperative setup. We propose a deformable registration method using intraoperative cone-beam CT (CBCT) to accurately align preoperative CT or MR images with the intraoperative scene. The registration method combines a Gaussian mixture (GM) model followed by a variation of the Demons algorithm. First, following segmentation of the volume of interest (i.e., volume of the tongue extending to the hyoid), a GM model is applied to surface point clouds for rigid initialization (GM rigid) followed by nonrigid deformation (GM nonrigid). Second, the registration is refined using the Demons algorithm applied to distance map transforms of the (GM-registered) preoperative image and intraoperative CBCT. Performance was evaluated in repeat cadaver studies (25 image pairs) in terms of target registration error (TRE), entropy correlation coefficient (ECC), and normalized pointwise mutual information (NPMI). Retraction of the tongue in the TORS operative setup induced gross deformation >30 mm. The mean TRE following the GM rigid, GM nonrigid, and Demons steps was 4.6, 2.1, and 1.7 mm, respectively. The respective ECC was 0.57, 0.70, and 0.73 and NPMI was 0.46, 0.57, and 0.60. Registration accuracy was best across the superior aspect of the tongue and in proximity to the hyoid (by virtue of GM registration of surface points on these structures). The Demons step refined registration primarily in deeper portions of the tongue further from the surface and hyoid bone. Since the method does not use image intensities directly, it is suitable to multi-modality registration of preoperative CT or MR with intraoperative CBCT. 
Extending the 3D image registration to the fusion of image and planning data in stereo-endoscopic video is anticipated to support safer, high-precision base of tongue robotic surgery.
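The GM rigid initialization step can be illustrated with one EM-style iteration of Gaussian-mixture point-cloud alignment: soft correspondences between surface points followed by a weighted Kabsch solve for the rigid transform. This is a simplified stand-in under assumed names and parameters, not the authors' implementation.

```python
import numpy as np

def gm_rigid_step(src, tgt, sigma=1.0):
    """One EM-style iteration of Gaussian-mixture rigid alignment of point
    clouds src -> tgt, both of shape (N, 3).

    Returns (R, t) such that tgt ~= src @ R.T + t.
    """
    # E-step: soft correspondence weights between all point pairs
    d2 = ((src[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    W /= W.sum(axis=1, keepdims=True)
    tgt_soft = W @ tgt                      # expected target per source point
    # M-step: weighted Kabsch solve for the best rigid transform
    mu_s, mu_t = src.mean(0), tgt_soft.mean(0)
    H = (src - mu_s).T @ (tgt_soft - mu_t)
    U, _, Vt = np.linalg.svd(H)
    # sign correction guards against a reflection solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_t - R @ mu_s
    return R, t
```

With well-separated points and a small perturbation, a single iteration with a tight kernel recovers the rigid transform; in practice the kernel width is annealed over iterations before handing off to the nonrigid and Demons refinement stages.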