This paper presents an overview of the Image-Guided Surgery Toolkit (IGSTK). IGSTK is an open source C++ software library that provides the basic components needed to develop image-guided surgery applications, and it is intended for fast prototyping and development of such applications. The toolkit was developed through a collaboration between academic and industry partners. Because IGSTK was designed for safety-critical applications, the development team adopted lightweight software processes that emphasize safety and robustness while, at the same time, supporting geographically separated developers. A software process philosophically similar to agile methods was adopted, emphasizing iterative, incremental, and test-driven development principles. The guiding principle in the architecture design of IGSTK is patient safety. The IGSTK team implemented a component-based architecture and used state machine software design methodologies to improve the reliability and safety of the components. Every IGSTK component has a well-defined set of features governed by a state machine, which ensures that the component is always in a valid state and that all state transitions are valid and meaningful. Realizing that the continued success and viability of an open source toolkit depends on a strong user community, the IGSTK team is following several key strategies to build an active user community. These include maintaining users' and developers' mailing lists, providing documentation (an application programming interface reference and a book), presenting demonstration applications, and delivering tutorial sessions at relevant scientific conferences.
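The state-machine-guarded component idea can be sketched compactly: a table of valid transitions is consulted before any request is honored, so an out-of-order request is rejected rather than allowed to put the component into an undefined state. The sketch below, in Python for brevity, is a language-agnostic illustration only; the class, state, and request names are hypothetical and do not reflect IGSTK's actual C++ API.

```python
# A minimal, language-agnostic sketch of a state-machine-guarded component.
# Class, state, and request names are hypothetical, not IGSTK's actual API.

class TrackerComponent:
    """Component whose behavior is restricted to valid state transitions."""

    # Each state maps an allowed request to the state it leads to.
    TRANSITIONS = {
        "Idle":          {"Open": "Communicating"},
        "Communicating": {"StartTracking": "Tracking", "Close": "Idle"},
        "Tracking":      {"StopTracking": "Communicating"},
    }

    def __init__(self):
        self.state = "Idle"

    def request(self, action):
        """Process a request; invalid requests are rejected, never crash."""
        allowed = self.TRANSITIONS[self.state]
        if action in allowed:
            self.state = allowed[action]
            return True
        # Invalid input for the current state: report it and stay in a valid state.
        print(f"Request '{action}' ignored in state '{self.state}'")
        return False


if __name__ == "__main__":
    tracker = TrackerComponent()
    tracker.request("StartTracking")  # rejected: must open communication first
    tracker.request("Open")
    tracker.request("StartTracking")  # now valid
    print(tracker.state)              # Tracking
```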
A major limitation of the use of endoscopes in minimally invasive surgery is the lack of relative context between the endoscope and its surroundings. The purpose of this work was to fuse images obtained from a tracked endoscope with surfaces derived from three-dimensional (3-D) preoperative magnetic resonance or computed tomography (CT) data, for assistance in surgical planning, training, and guidance. We extracted polygonal surfaces from preoperative CT images of a standard brain phantom and digitized endoscopic video images from a tracked neuro-endoscope. The optical properties of the endoscope were characterized using a simple calibration procedure. Registration of the phantom (physical space) and CT images (preoperative image space) was accomplished using fiducial markers that could be identified both on the phantom and within the images. The endoscopic images were corrected for radial lens distortion and then mapped onto the extracted surfaces via a two-dimensional (2-D) to 3-D mapping algorithm. The optical tracker has an accuracy of about 0.3 mm at its centroid, which allows the endoscope tip to be localized to within 1.0 mm. The mapping operation allows multiple endoscopic images to be "painted" onto the 3-D brain surfaces, as they are acquired, in the correct anatomical position. This allows panoramic and stereoscopic visualization, as well as navigation of the 3-D surface, painted with multiple endoscopic views, from arbitrary perspectives.
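The fiducial-based registration step can be illustrated with the classical point-based least-squares solution: given corresponding marker locations in phantom (physical) space and CT (image) space, the rigid rotation and translation are recovered from the SVD of the cross-covariance matrix. The Python sketch below is illustrative only; the marker coordinates are hypothetical, and it assumes corresponding fiducials have already been identified in both spaces.

```python
# A minimal sketch of fiducial-based rigid registration (point-based,
# least-squares, via SVD), assuming corresponding fiducial locations have
# already been identified in phantom (physical) space and CT image space.
import numpy as np

def rigid_register(physical_pts, image_pts):
    """Return rotation R and translation t mapping physical -> image space."""
    P = np.asarray(physical_pts, dtype=float)
    Q = np.asarray(image_pts, dtype=float)
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (P - p_mean).T @ (Q - q_mean)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = q_mean - R @ p_mean
    return R, t

if __name__ == "__main__":
    # Hypothetical phantom fiducials (mm) and their CT-space counterparts,
    # generated here by rotating and shifting the same points.
    phys = np.array([[0, 0, 0], [50, 0, 0], [0, 60, 0], [0, 0, 40]], float)
    angle = np.deg2rad(10)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                       [np.sin(angle),  np.cos(angle), 0],
                       [0, 0, 1]])
    img = phys @ R_true.T + np.array([5.0, -3.0, 12.0])
    R, t = rigid_register(phys, img)
    fre = np.linalg.norm(phys @ R.T + t - img, axis=1).mean()
    print("mean fiducial registration error:", fre)  # ~0 for noise-free data
```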
Performing a craniotomy causes brain tissue to shift. As a result, the accuracy of stereotactic localization techniques is reduced unless the brain shift can be accurately measured. If an ultrasound probe is tracked by a 3D optical tracking system, intraoperative ultrasound images acquired through the craniotomy can be compared to pre-operative MRI images to quantify the shift. We have developed 2D and 3D image overlay tools that allow interactive, real-time visualization of the shift, as well as software that uses homologous landmarks between the ultrasound and MRI image volumes to create a thin-plate-spline warp transformation that provides a mapping between pre-operative imaging coordinates and the shifted intra-operative coordinates. Our techniques have been demonstrated on polyvinyl alcohol cryogel phantoms, which exhibit mechanical and imaging properties similar to those of the human brain.
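The thin-plate-spline warp can be written down directly from the homologous landmark pairs: a radial-basis term plus an affine term is fitted so that each MRI landmark maps exactly onto its shifted ultrasound counterpart, and the same transformation is then applied to arbitrary points in the pre-operative volume. The sketch below is a minimal illustration assuming landmark pairs are already available; the coordinates are hypothetical and no regularization is applied.

```python
# A minimal sketch of a 3-D thin-plate-spline (TPS) warp built from homologous
# landmarks, assuming landmark pairs have already been picked in the MRI
# (pre-operative) and ultrasound (intra-operative) volumes. Coordinates below
# are hypothetical.
import numpy as np

def fit_tps(src, dst):
    """Fit a 3-D TPS mapping src landmarks (N,3) onto dst landmarks (N,3)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    # Radial kernel U(r) = r, a standard choice for 3-D thin-plate splines.
    K = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=2)
    P = np.hstack([np.ones((n, 1)), src])              # affine part
    L = np.zeros((n + 4, n + 4))
    L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
    rhs = np.vstack([dst, np.zeros((4, 3))])
    coefs = np.linalg.solve(L, rhs)
    return coefs[:n], coefs[n:]                        # weights W, affine A

def warp(points, src, W, A):
    """Apply the fitted TPS to arbitrary points (M,3)."""
    points = np.asarray(points, float)
    U = np.linalg.norm(points[:, None, :] - np.asarray(src, float)[None, :, :], axis=2)
    return U @ W + np.hstack([np.ones((len(points), 1)), points]) @ A

if __name__ == "__main__":
    mri = np.array([[0, 0, 0], [40, 0, 0], [0, 40, 0], [0, 0, 40], [20, 20, 20]], float)
    # Hypothetical shifted positions of the same landmarks in ultrasound space.
    us = mri + np.array([2.0, -1.5, 0.5])
    W, A = fit_tps(mri, us)
    print(warp([[10.0, 10.0, 10.0]], mri, W, A))       # follows the shift
```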
The most attractive feature of 2D B-mode ultrasound for intra-operative use is that it is both a real-time and a highly interactive modality. Most 3D freehand reconstruction methods, however, are not fully interactive because they do not allow the display of any part of the 3D ultrasound image until all data collection and reconstruction are finished. We describe a technique whereby the 3D reconstruction occurs in real time as the data are acquired, and where the operator can view the progress of the reconstruction on three orthogonal slice views through the ultrasound volume. Capture of the ultrasound data can be immediately followed by a straightforward, interactive nonlinear registration of a pre-operative MRI volume to match the intra-operative ultrasound. We demonstrate our system on a deformable, multi-modal PVA-cryogel phantom and during a clinical surgery.
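The incremental reconstruction can be sketched as a per-frame scatter operation: each tracked B-mode frame is transformed from the image plane into volume coordinates using its tracker pose and written into the nearest voxels, so orthogonal slice views can be refreshed as soon as the frame arrives. The Python sketch below assumes a pixel nearest-neighbor insertion scheme; the pose, pixel spacing, and volume geometry are hypothetical, and no hole filling or compounding of overlapping frames is shown.

```python
# A minimal sketch of incremental (real-time) freehand 3-D reconstruction:
# each tracked 2-D B-mode frame is inserted into the volume as it arrives.
# The pose, pixel spacing, and volume geometry below are hypothetical.
import numpy as np

def insert_frame(volume, frame, pose, pixel_spacing, voxel_spacing):
    """Scatter one 2-D frame into the 3-D volume (pixel nearest-neighbor).

    volume        : (Z, Y, X) intensity volume being reconstructed
    frame         : (rows, cols) B-mode image
    pose          : 4x4 tracker transform from the image plane to volume space (mm)
    pixel_spacing : (row_mm, col_mm) size of one frame pixel
    voxel_spacing : size of one cubic voxel in mm
    """
    rows, cols = frame.shape
    r, c = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    # Pixel coordinates in the image plane (z = 0), in millimetres, homogeneous.
    plane = np.stack([c * pixel_spacing[1], r * pixel_spacing[0],
                      np.zeros_like(r, float), np.ones_like(r, float)], axis=-1)
    world = plane.reshape(-1, 4) @ pose.T                  # into volume space
    idx = np.round(world[:, :3] / voxel_spacing).astype(int)
    ok = np.all((idx >= 0) & (idx < np.array(volume.shape)[::-1]), axis=1)
    x, y, z = idx[ok].T
    volume[z, y, x] = frame.reshape(-1)[ok]                # latest value wins

if __name__ == "__main__":
    vol = np.zeros((64, 64, 64), dtype=float)
    frame = np.random.rand(100, 120)
    pose = np.eye(4)
    pose[:3, 3] = [5.0, 5.0, 20.0]                         # hypothetical pose
    insert_frame(vol, frame, pose, pixel_spacing=(0.2, 0.2), voxel_spacing=0.5)
    print("non-zero voxels:", np.count_nonzero(vol))
```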