Background: Advances in computer science, including novel 3-dimensional rendering techniques, have enabled the creation of cloud-based virtual reality (VR) interfaces that make real-time peer-to-peer interaction possible even from remote locations. This study addresses the potential use of this technology for microsurgical anatomy education. Methods: Digital specimens were created using multiple photogrammetry techniques and imported into a virtual simulated neuroanatomy dissection laboratory. A VR educational program built around a multiuser virtual anatomy laboratory experience was developed. For internal validation, 5 multinational neurosurgery visiting scholars tested and assessed the digital VR models; for external validation, 20 neurosurgery residents tested and assessed the same models and virtual space. Results: Each participant responded to 14 statements assessing the virtual models, categorized under realism (n = 3), usefulness (n = 2), practicality (n = 3), enjoyment (n = 3), and recommendation (n = 3). Most responses expressed agreement or strong agreement with the assessment statements (internal validation, 94% [66/70] of responses; external validation, 91.4% [256/280] of responses). Notably, most participants strongly agreed that this system should be part of neurosurgery residency training and that virtual cadaver courses delivered through this platform could be effective for education. Conclusion: Cloud-based VR interfaces are a novel resource for neurosurgery education. Interactive and remote collaboration between instructors and trainees is possible in virtual environments using volumetric models created with photogrammetry. We believe that this technology could become part of a hybrid anatomy curriculum for neurosurgery education. More studies are needed to assess the educational value of this type of innovative resource.
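The abstract does not document the asset pipeline behind the "imported into a virtual simulated neuroanatomy dissection laboratory" step. As a rough illustration only: a photogrammetric specimen bound for a shared VR space is typically exported as a textured mesh and decimated to a streaming-friendly triangle budget. The minimal Python sketch below assumes a glTF export; the file names, face-count target, and use of the trimesh library are illustrative assumptions, not the authors' method.

    import trimesh

    # Load the photogrammetry output as a single textured mesh
    # (force="mesh" flattens a glTF scene into one Trimesh object).
    mesh = trimesh.load("cadaver_dissection.glb", force="mesh")  # hypothetical file

    # Multiuser cloud VR clients stream more smoothly with a bounded
    # polygon count, so decimate before upload (needs a simplification backend).
    vr_ready = mesh.simplify_quadric_decimation(face_count=100_000)  # assumed budget
    vr_ready.export("cadaver_dissection_vr.glb")

In a multiuser setting every client downloads the same decimated asset, so the face-count budget trades visual fidelity against load time and headset frame rate.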
BACKGROUND: Immersive anatomic environments offer an alternative when anatomic laboratory access is limited, but current three-dimensional (3D) renderings are not able to simulate the anatomic detail and surgical perspectives needed for microsurgical education. OBJECTIVE: To perform a proof-of-concept study of a novel photogrammetry 3D reconstruction technique, converting high-definition (monoscopic) microsurgical images into a navigable, interactive, immersive anatomy simulation. METHODS: Images were acquired from cadaveric dissections and from an open-access comprehensive online microsurgical anatomic image database. A pretrained neural network capable of depth estimation from a single image was used to create depth maps (pixelated images containing distance information that could be used for spatial reprojection and 3D rendering). The virtual reality (VR) experience was assessed using a VR headset, and the augmented reality (AR) experience was assessed using a quick response (QR) code-based application and a tablet camera. RESULTS: Significant correlation was found between processed image depth estimations and neuronavigation-defined coordinates at different levels of magnification. Immersive anatomic models were created from dissection images captured in the authors' laboratory and from images retrieved from the Rhoton Collection. Interactive visualization and magnification allowed multiple perspectives for an enhanced experience in VR. The QR code offered a convenient method for importing anatomic models into the real world for rehearsal and for comparing other anatomic preparations side by side. CONCLUSION: This proof-of-concept study validated the use of machine learning to render 3D reconstructions from two-dimensional microsurgical images through depth estimation. This spatial information can be used to develop convenient, realistic, and immersive anatomy image models.
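The abstract describes the core technique, single-image depth estimation followed by spatial reprojection, without naming the network. The following minimal Python sketch shows that idea using the open-source MiDaS model as a stand-in for the unnamed network; the input file name and camera intrinsics are hypothetical assumptions.

    import cv2
    import numpy as np
    import torch

    # Pretrained monocular depth model (a stand-in; the study's network is unnamed).
    midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
    midas.eval()
    transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

    img = cv2.cvtColor(cv2.imread("dissection.jpg"), cv2.COLOR_BGR2RGB)

    with torch.no_grad():
        pred = midas(transform(img))  # relative inverse depth
        pred = torch.nn.functional.interpolate(
            pred.unsqueeze(1), size=img.shape[:2],
            mode="bicubic", align_corners=False,
        ).squeeze().numpy()

    # The map is relative; scaling it to millimeters would require calibration,
    # e.g., against neuronavigation-defined coordinates as in the study.
    z = 1.0 / np.maximum(pred, 1e-6)

    # Reproject each pixel with an assumed pinhole camera:
    # X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    h, w = z.shape
    fx = fy = 0.8 * w               # assumed focal length
    cx, cy = w / 2.0, h / 2.0
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    points = np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=-1)
    # points (H x W x 3) can be meshed or rendered directly for VR/AR viewing.

The validation reported above amounts to correlating such per-pixel estimates with ground-truth coordinates; with paired samples, an ordinary Pearson correlation (e.g., scipy.stats.pearsonr) would suffice for that comparison.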
BACKGROUND: Understanding the anatomy of the human cerebrum, cerebellum, and brainstem and their 3-dimensional (3D) relationships is critical for neurosurgery. Although 3D photogrammetric models of cadaver brains and 2-dimensional images of postmortem brain slices are available, neurosurgeons lack free access to 3D models of the cross-sectional anatomy of the cerebrum, cerebellum, and brainstem that can be simulated in both augmented reality (AR) and virtual reality (VR). OBJECTIVE: To create 3D models and AR/VR simulations from 2-dimensional images of cross-sectionally dissected cadaveric specimens of the cerebrum, cerebellum, and brainstem. METHODS: The Klingler method was used to prepare 3 cadaveric specimens for dissection in the axial, sagittal, and coronal planes. A series of 3D models and AR/VR simulations were then created using 360° photogrammetry. RESULTS: High-resolution 3D models of the cross-sectional anatomy of the cerebrum, cerebellum, and brainstem were obtained and used to create AR/VR simulations. Eleven axial, 9 sagittal, and 7 coronal 3D models were created. The sections were planned to show important deep anatomic structures. These models can be freely rotated, projected onto any surface, viewed from all angles, and examined at various magnifications. CONCLUSION: To our knowledge, this detailed study is the first to combine up-to-date technologies (photogrammetry, AR, and VR) for high-resolution 3D visualization of the cross-sectional anatomy of the entire human cerebrum, cerebellum, and brainstem. The resulting 3D images are freely available to medical professionals and students for better comprehension of the 3D relationships of the deep and superficial brain anatomy.