A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance.
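The core idea, interleaving an intensity-correction step with the Demons force update, can be illustrated with a minimal single-resolution 2D sketch. This is not the authors' implementation: histogram matching of the warped image to the fixed image stands in for the paper's iterative CT-CBCT intensity correction, and the function name and parameters are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_with_intensity_correction(fixed, moving, iters=100, sigma=1.0):
    """Toy 2D Demons loop with a per-iteration intensity correction.

    Each iteration: (1) warp the moving image by the current field,
    (2) correct its intensities toward the fixed image (histogram
    matching used here as a simple stand-in), (3) add the smoothed
    Demons force to the displacement field.
    """
    uy = np.zeros_like(fixed)
    ux = np.zeros_like(fixed)
    yy, xx = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)
    gy, gx = np.gradient(fixed)  # gradients of the fixed image

    for _ in range(iters):
        warped = map_coordinates(moving, [yy + uy, xx + ux],
                                 order=1, mode='nearest')
        # Intensity correction (stand-in): give the warped image the
        # fixed image's intensity distribution before computing forces.
        order = np.argsort(warped, axis=None)
        matched = np.empty(warped.size)
        matched[order] = np.sort(fixed, axis=None)
        warped = matched.reshape(fixed.shape)

        # Classic Demons force, regularized by Gaussian smoothing.
        diff = fixed - warped
        denom = gy**2 + gx**2 + diff**2
        denom = np.where(denom == 0, 1.0, denom)
        uy += gaussian_filter(diff * gy / denom, sigma)
        ux += gaussian_filter(diff * gx / denom, sigma)
    return uy, ux, warped
```

Without step (2), a CT-CBCT intensity offset would be misread as geometric mismatch and pull the deformation field toward spurious displacements; correcting intensities each iteration lets the force term respond to geometry alone.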
Purpose: A flat-panel detector based mobile isocentric C-arm for cone-beam CT (CBCT) has been developed to allow intraoperative 3D imaging with sub-millimeter spatial resolution and soft-tissue visibility. Image quality and radiation dose were evaluated in spinal surgery, which commonly relies on lower-performance image-intensifier-based mobile C-arms. Scan protocols were developed for task-specific imaging at minimum dose, in-room exposure was evaluated, and integration of the imaging system with a surgical guidance system was demonstrated in preclinical studies of minimally invasive spine surgery. Methods: Radiation dose was assessed as a function of kilovolt (peak) (80-120 kVp) and milliampere-seconds using thoracic and lumbar spine dosimetry phantoms. In-room radiation exposure was measured throughout the operating room for various CBCT scan protocols. Image quality was assessed using tissue-equivalent inserts in chest and abdomen phantoms to evaluate bone and soft-tissue contrast-to-noise ratio as a function of dose, and task-specific protocols (i.e., visualization of bone or soft tissue) were defined. Results were applied in preclinical studies using a cadaveric torso simulating minimally invasive, transpedicular surgery. Results: Task-specific CBCT protocols identified include: thoracic bone visualization (100 kVp; 60 mAs; 1.8 mGy); lumbar bone visualization (100 kVp; 130 mAs; 3.2 mGy); thoracic soft-tissue visualization (100 kVp; 230 mAs; 4.3 mGy); and lumbar soft-tissue visualization (120 kVp; 460 mAs; 10.6 mGy), each at 0.3 × 0.3 × 0.9 mm³ voxel size. An alternative lower-dose, lower-resolution soft-tissue visualization protocol was identified for the lumbar region (100 kVp; 230 mAs; 5.1 mGy) at 0.3 × 0.3 × 1.5 mm³ voxel size.
Half-scan orbit of the C-arm (x-ray tube traversing under the table) was dosimetrically advantageous (prepatient attenuation), with a nonuniform dose distribution (~2× higher at the entrance side than at isocenter, and ~3-4× lower at the exit side). The in-room dose (microsievert) per unit scan dose (milligray) ranged from ~21 µSv/mGy on average at tableside to ~0.1 µSv/mGy at 2.0 m distance from isocenter. All protocols involve surgical staff stepping behind a shield wall for each CBCT scan, thereby imparting ~zero dose to staff. Protocol implementation in preclinical cadaveric studies demonstrated integration of the C-arm with a navigation system for spine surgery guidance, specifically minimally invasive vertebroplasty, in which the system provided accurate guidance and visualization of needle placement and bone cement distribution. Cumulative dose including multiple intraoperative scans was ~11.5 mGy for CBCT-guided thoracic vertebroplasty and ~23.2 mGy for lumbar vertebroplasty, with dose to staff at tableside reduced to ~1 min of fluoroscopy time (~40-60 µSv), compared to 5-11 min for the conventional approach. Conclusions: Intraoperative CBCT using a high-performance mobile C-arm prototype demonstrates image quality suitable to guidance of spine surgery, with task-specific pr...
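The task-specific protocol selection above rests on the standard quantum-noise relationship between dose and contrast-to-noise ratio: under Poisson statistics, noise scales as 1/√dose, so CNR scales as √dose. A small sketch of that scaling (the function names and example values are illustrative, not from the paper):

```python
import numpy as np

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio from voxel samples of an insert ROI
    and a background ROI (1D arrays of voxel values)."""
    return abs(np.mean(roi_signal) - np.mean(roi_background)) / np.std(roi_background)

def dose_for_target_cnr(cnr_ref, dose_ref_mgy, cnr_target):
    """Dose needed to reach a target CNR, assuming quantum-limited
    noise so that CNR ~ sqrt(dose); scales with the CNR ratio squared."""
    return dose_ref_mgy * (cnr_target / cnr_ref) ** 2
```

For example, doubling the CNR achieved by a reference protocol requires roughly 4× its dose, which is why the soft-tissue protocols above sit at several times the dose of the bone protocols.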
The trend toward minimally invasive surgical interventions has created new challenges for visualization during surgical procedures. At the same time, the introduction of high-definition digital endoscopy offers the opportunity to apply methods from computer vision to provide visualization enhancements such as anatomic reconstruction, surface registration, motion tracking, and augmented reality. This review provides a perspective on this rapidly evolving field. It first introduces the clinical and technical background necessary for developing vision-based algorithms for interventional applications. It then discusses several examples of clinical interventions where computer vision can be applied, including bronchoscopy, rhinoscopy, transnasal skull-base neurosurgery, upper airway interventions, laparoscopy, robotic-assisted surgery, and Natural Orifice Transluminal Endoscopic Surgery (NOTES). It concludes that the currently reported work is only the beginning. As the demand for minimally invasive procedures rises, computer vision in surgery will continue to advance through close interdisciplinary work between interventionists and engineers.
Adaptation of the Demons deformable registration process to include segmentation (i.e., identification of excised tissue) and an extra dimension in the deformation field provided a means to accurately accommodate missing tissue between image acquisitions. The extra-dimensional approach yielded accurate "ejection" of voxels local to the excision site while preserving the registration accuracy (typically subvoxel) of the conventional Demons approach throughout the rest of the image. The ability to accommodate missing tissue volumes is important to application of CBCT for surgical guidance (e.g., skull base drillout) and may have application in other areas of CBCT guidance.
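The effect of the segmentation step can be sketched with a simplified 2D stand-in: a precomputed mask of excised tissue zeros the Demons driving force at "ejected" voxels, so the excision site exerts no pull on the surrounding deformation field. This is not the extra-dimensional formulation itself (which adds a dimension to the field rather than masking forces); the function and its `excised_mask` input are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def demons_force_with_excision(fixed, warped_moving, excised_mask, sigma=1.0):
    """One Demons force evaluation with excised voxels suppressed.

    excised_mask: boolean array marking tissue present in the fixed
    image but removed before the later acquisition (assumed input).
    Forces there are zeroed so missing tissue cannot drag the field.
    """
    gy, gx = np.gradient(fixed)
    diff = fixed - warped_moving
    denom = gy**2 + gx**2 + diff**2
    denom = np.where(denom == 0, 1.0, denom)
    fy = diff * gy / denom
    fx = diff * gx / denom
    fy[excised_mask] = 0.0  # ejected voxels exert no pull on the field
    fx[excised_mask] = 0.0
    return gaussian_filter(fy, sigma), gaussian_filter(fx, sigma)
```

In the paper's approach the ejected voxels instead acquire a displacement along the extra dimension, which removes them from the 3D domain while leaving the in-plane field smooth, achieving the same goal without a hard force cutoff.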
Abstract: Surgeries of the skull base require accuracy to safely navigate the critical anatomy. This is particularly the case for endoscopic endonasal skull base surgery (ESBS), where surgeons work within millimeters of neurovascular structures at the skull base. Today's navigation systems provide approximately 2 mm accuracy. Accuracy is limited by the indirect relationship between the navigation system, the image, and the patient. We propose a method to directly track the position of the endoscope using video data acquired from the endoscope camera. Our method first tracks feature points in the video and reconstructs them into three-dimensional (3D) points, and then registers the reconstructed point cloud to a surface segmented from pre-operative computed tomography (CT) data. After the initial registration, the system tracks image features and maintains the two-dimensional (2D)-3D correspondence between image features and 3D locations. These data are then used to update the current camera pose. We present a method for validation of our system, which achieves sub-millimeter (0.70 mm mean) target registration error (TRE).
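The rigid alignment at the heart of such a point-cloud-to-CT registration is commonly solved in closed form by the Kabsch (SVD) method. The sketch below shows that building block for already-matched point sets; the correspondence search against the segmented CT surface, iterated ICP-style, is omitted, and the function name is an assumption.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t,
    via the Kabsch/SVD method, for matched Nx3 point sets."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

In an ICP-style loop, each iteration pairs every reconstructed video point with its nearest point on the CT surface, calls a solver like this on the pairs, applies the transform, and repeats until the residual (which relates to the reported TRE) stops improving.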