Planning surgical interventions is a complex task, demanding a high degree of perceptual, cognitive, and sensorimotor skill to reduce intra- and post-operative complications. This process requires spatial reasoning to coordinate between the preoperatively acquired medical images and the patient reference frame. In neurosurgical interventions, traditional approaches to planning tend to focus on providing a means for visualizing medical images, but rarely support transformation between different spatial reference frames. Thus, surgeons often rely on their previous experience and intuition as their sole guide when performing these mental transformations. In the case of junior residents, this may lead to longer operation times or an increased chance of error under additional cognitive demands. In this paper, we introduce a mixed augmented-/virtual-reality system to facilitate training for planning a common neurosurgical procedure, brain tumour resection. The proposed system is designed and evaluated with human factors explicitly in mind, alleviating the difficulty of mental transformation. Our results indicate that, compared to conventional planning environments, the proposed system greatly improves nonclinicians' performance, independent of the sensorimotor task performed. Furthermore, the use of the proposed system by clinicians resulted in a significant reduction in time to perform clinically relevant tasks. These results demonstrate the role of mixed-reality systems in helping residents develop the spatial reasoning skills needed for planning brain tumour resection, ultimately improving patient outcomes.
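The mental transformation the system alleviates is, in essence, mapping points between the image and patient reference frames. A minimal sketch of that mapping, using a hypothetical rigid registration result (the rotation and translation below are illustrative, not from the paper):

```python
import numpy as np

def to_patient_frame(p_image, R, t):
    """Map a point from the image frame to the patient frame: p = R @ p_image + t."""
    return R @ p_image + t

# Hypothetical registration: 90-degree rotation about z plus a 10 mm translation.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([10.0, 0.0, 0.0])

# A tumour landmark at (1, 0, 0) in the image frame lands at (10, 1, 0) in the patient frame.
p_patient = to_patient_frame(np.array([1.0, 0.0, 0.0]), R, t)
print(np.round(p_patient, 6))
```

Performing this composition of rotation and translation mentally, for many landmarks and viewing angles, is the cognitive load the mixed-reality visualization is designed to offload.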
We have introduced the first TRE-guided ultrasound calibration framework. Using a hollow straw as an oriented line phantom, we virtually constructed a rigid phantom of lines and modeled the calibration process as a point-to-line registration. Highly accurate calibration was achieved with minimal measurements by using a spatial stiffness model of TRE to strategically choose the pose of the calibration phantom between successive measurements.
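Point-to-line registration of the kind described amounts to minimizing the sum of squared distances from the transformed points to their corresponding lines. A minimal sketch of that objective on toy data (hypothetical; not the authors' implementation), with each line represented by a point `a` and a unit direction `d`:

```python
import numpy as np

def point_to_line_dist(p, a, d):
    """Distance from point p to the line through a with unit direction d."""
    v = p - a
    return np.linalg.norm(v - np.dot(v, d) * d)  # remove the component along the line

def registration_cost(points, lines, R, t):
    """Sum of squared point-to-line distances after the rigid transform (R, t)."""
    return sum(point_to_line_dist(R @ p + t, a, d) ** 2
               for p, (a, d) in zip(points, lines))

# Toy example: points already on the z-axis line give zero cost under the identity.
line = (np.zeros(3), np.array([0.0, 0.0, 1.0]))           # the z-axis
points = [np.array([0.0, 0.0, z]) for z in (0.5, 1.0)]
print(registration_cost(points, [line, line], np.eye(3), np.zeros(3)))
```

In a full calibration, an optimizer would search over (R, t) to drive this cost toward zero; the framework's contribution is using the TRE stiffness model to pick phantom poses that make each added measurement maximally informative.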
Abstract. Registration of intraoperative ultrasound (US) with preoperative computed tomography (CT) data for interventional guidance is a subject of immense interest, particularly for percutaneous spinal injections. We propose a biomechanically constrained group-wise registration of US to CT images of the lumbar spine. Each vertebra in CT is treated as a sub-volume and transformed individually. The sub-volumes are then reconstructed into a single volume. At each iteration of the registration, the algorithm simulates an US image from the CT data, which is used to calculate an intensity-based similarity metric against the real US image. A biomechanical model is used to constrain the displacement of the vertebrae relative to one another. Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is utilized as the optimization strategy. Validation is performed on CT and US images from a phantom designed to preserve realistic curvatures of the spine. The technique is able to register initial misalignments of up to 20 mm with a success rate of 82%, and those of up to 10 mm with a success rate of 98.6%.
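The biomechanical constraint can be thought of as a regularizer added to the similarity metric: per-vertebra transforms that move adjacent vertebrae apart are penalized. A minimal sketch of such an objective (an illustrative spring-like penalty on translations only; the paper's model and weights are not reproduced here):

```python
import numpy as np

def biomech_penalty(displacements, stiffness=1.0):
    """Spring-like penalty on relative displacement of adjacent vertebrae.
    displacements: (n_vertebrae, 3) array of per-vertebra translations."""
    diffs = np.diff(displacements, axis=0)   # motion of each vertebra relative to its neighbour
    return stiffness * np.sum(diffs ** 2)

def total_cost(similarity, displacements, weight=0.1):
    """Objective for a minimizer such as CMA-ES: reward image similarity,
    penalize biomechanically implausible relative motion (lower is better)."""
    return -similarity + weight * biomech_penalty(displacements)

# A rigid whole-spine shift has no relative motion, so it incurs no penalty...
d_rigid = np.tile([2.0, 0.0, 0.0], (5, 1))
print(biomech_penalty(d_rigid))
# ...while one vertebra displaced relative to its neighbours is penalized.
d_bent = d_rigid.copy()
d_bent[2, 1] += 3.0
print(biomech_penalty(d_bent) > 0)
```

CMA-ES then searches the stacked per-vertebra transform parameters for the minimum of this combined cost, which is why physically implausible configurations are rejected even when they happen to score well on intensity similarity alone.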
Computer-assisted training systems promote both training efficacy and patient health. An important component for providing automatic feedback in computer-assisted training systems is workflow segmentation: the determination of which task in the workflow is being performed. Our objective was to develop a workflow segmentation algorithm for needle interventions using needle tracking data. Needle tracking data were collected from ultrasound-guided epidural injections and lumbar punctures performed by medical personnel. The workflow segmentation algorithm was tested in a simulated real-time scenario: the algorithm was only allowed access to data recorded at, or prior to, the time being segmented. Segmentation output was compared to the ground-truth segmentations produced by independent blinded observers. Overall, the algorithm was 93% accurate. It automatically segmented the ultrasound-guided epidural procedures with 81% accuracy and the lumbar punctures with 82% accuracy. Relative to a manual segmentation consistency of only 84% among observers, the algorithm's 93% overall accuracy is high; using Cohen's d statistic, a medium effect size (0.5) was calculated. Because the algorithm segments needle-based procedures with such high accuracy, expert observers can be augmented by this algorithm without a large decrease in the ability to follow trainees through a workflow. The proposed algorithm is feasible for use in a computer-assisted needle placement training system.
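The key constraint in the simulated real-time scenario is causality: each sample may be labeled using only data up to the current time. A minimal sketch of a causal segmenter on toy tracking data (the threshold rule and phase names below are hypothetical, not the authors' algorithm):

```python
import numpy as np

def segment_causal(positions, times, speed_threshold=2.0, window=3):
    """Label each sample 'insertion' or 'holding' using only past data.
    positions: (n, 3) needle-tip positions (mm); times: (n,) timestamps (s).
    Hypothetical rule: high tip speed over a trailing window => 'insertion'."""
    labels = []
    for i in range(len(positions)):
        if i == 0:
            labels.append("holding")
            continue
        lo = max(0, i - window)                      # trailing window only: causal
        path = np.linalg.norm(np.diff(positions[lo:i + 1], axis=0), axis=1).sum()
        dt = times[i] - times[lo]
        speed = path / dt if dt > 0 else 0.0
        labels.append("insertion" if speed > speed_threshold else "holding")
    return labels

# Toy trajectory: stationary, then advancing 10 mm per second along z.
pos = np.array([[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 10], [0, 0, 20]], dtype=float)
t = np.arange(5, dtype=float)
print(segment_causal(pos, t))  # ['holding', 'holding', 'holding', 'insertion', 'insertion']
```

A trailing-window rule like this trades a small detection lag for the guarantee that no future samples are used, which is what makes the accuracy figures above meaningful for live trainee monitoring.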