Despite the undeniable accuracy advantages of image-guided surgical assistance systems, such systems have not yet fully met surgeons’ needs or expectations regarding usability, time efficiency, and integration into the surgical workflow. Perceptual studies, on the other hand, have shown that presenting independent but causally correlated information via multimodal feedback involving different sensory modalities can improve task performance. This article investigates an alternative method for computer-assisted surgical navigation and discusses advanced solutions based on multisensory feedback. We introduce a novel sonification methodology, based on frequency modulation synthesis, for alignment tasks in four degrees of freedom (DOF) during navigated pedicle screw placement. We compared the accuracy and execution time of the proposed sonification method with those of visual navigation, which is currently considered the state of the art. In a phantom study, 17 surgeons executed the pedicle screw placement task in the lumbar spine, guided by either the proposed sonification-based method or the traditional visual navigation method. The results demonstrate that the proposed method is as accurate as the state of the art while reducing the surgeon’s need to attend to navigation displays, allowing a natural focus on the surgical tools and targeted anatomy during task execution.
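To make the idea of four-DOF frequency-modulation sonification concrete, the sketch below renders a short FM tone whose parameters encode four normalized alignment errors. The specific mapping of errors to carrier frequency, modulator frequency, modulation index, and loudness is a hypothetical illustration, not the parameterization published by the authors.

```python
import numpy as np

def fm_tone(err_x, err_y, err_depth, err_angle, sr=44100, dur=0.2):
    """Render a short FM-synthesis tone encoding four alignment errors
    (each assumed normalized to [0, 1]). The error-to-parameter mapping
    below is an illustrative assumption, not the authors' design."""
    t = np.arange(int(sr * dur)) / sr
    f_c = 440.0 + 440.0 * err_x      # carrier frequency (Hz)
    f_m = 2.0 + 18.0 * err_y         # modulator frequency (Hz)
    index = 0.5 + 7.5 * err_depth    # modulation index (timbre brightness)
    amp = 0.2 + 0.6 * err_angle      # loudness
    # Classic FM synthesis: carrier phase is modulated by a sinusoid.
    return amp * np.sin(2 * np.pi * f_c * t
                        + index * np.sin(2 * np.pi * f_m * t))
```

As all four errors approach zero, the tone converges to a quiet, low, nearly unmodulated sine, so the surgeon can steer toward a single well-defined auditory target.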
Optical coherence tomography (OCT) is a medical imaging modality that is commonly used to diagnose retinal diseases. In recent years, linear and radial scanning patterns have been proposed to acquire three-dimensional OCT data. These patterns differ in A-scan acquisition density across the generated volumes, and thus in their suitability for the diagnosis of retinal diseases. While radial OCT volumes exhibit a higher A-scan sampling rate around the scan center, linear scans contain more information in the peripheral scan areas. In this paper, we propose a method that combines a linearly and a radially acquired OCT volume into a single compound volume, merging the advantages of both scanning patterns to increase the information that can be gained from the three-dimensional OCT data. We first generate 3D point clouds from the linearly and radially acquired OCT volumes and use an Iterative Closest Point (ICP) variant to register them. After registration, the compound volume is created by selectively exploiting linear and radial scanning data, depending on the A-scan density of the individual scans. By fusing regions from both volumes according to their local A-scan sampling density, we achieve improved overall anatomical OCT information in a high-resolution compound volume. We demonstrate our method on linear and radial OCT volumes for the visualization and analysis of macular holes and the surrounding anatomical structures.
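The pipeline above, registering the two volumes with ICP and then selecting data by local A-scan density, can be sketched as follows. This is a minimal point-to-point ICP with brute-force nearest-neighbour search, and the fusion threshold `r_switch` is a hypothetical illustration; the paper uses an ICP variant and a density-driven fusion whose details are not reproduced here.

```python
import numpy as np

def icp_align(src, dst, iters=20):
    """Minimal point-to-point ICP aligning point cloud `src` (N x 3)
    to `dst` (M x 3). Illustrative sketch only, not the paper's variant."""
    src = src.copy()
    for _ in range(iters):
        # Nearest-neighbour correspondences (brute force, for clarity).
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # Optimal rigid transform via SVD (Kabsch algorithm).
        mu_s, mu_d = src.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_d))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        src = (src - mu_s) @ R.T + mu_d
    return src

def fuse(linear_val, radial_val, r, r_switch=0.5):
    """Toy fusion rule: prefer the radial scan near the scan center
    (higher A-scan density there) and the linear scan in the periphery.
    `r` is the normalized distance from the center; `r_switch` is a
    hypothetical threshold."""
    return radial_val if r < r_switch else linear_val
```

After `icp_align` brings both point clouds into a common frame, applying `fuse` per location yields a compound volume that keeps the densest available sampling everywhere.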