Abstract—The advent of intraoperative real-time image guidance has led to the emergence of new surgical interventional paradigms, including image-guided robot assistance. Most often, the use of an intraoperative imaging modality is limited to visual perception of the area of the procedure. In this work, we propose a framework for performing robot-assisted interventions under real-time Magnetic Resonance Imaging (rtMRI) guidance. The computational core of this framework processes rtMRI on-the-fly, integrates the processed information with robot control, and renders it on the human-machine interfaces. This information is rendered on a visualization and force-feedback interface for enhanced perception of a dynamic area of procedure and for assisting the operator in safely and accurately maneuvering a robotic manipulator. The framework was experimentally tested on a simulated Transapical Aortic Valve Implantation with a virtual robotic manipulator. rtMRI data were processed on-the-fly in a rolling-window scheme and, together with a multi-threaded, multi-hardware implementation, the core delivered update rates of 20 Hz for visualization and 1000 Hz for force feedback. The experimental results demonstrate significant improvement in the simulated task: the duration of the procedure was halved, and safety in the presence of cardiac and breathing motion was increased by reducing the duration of incidents in which the operator collides with the tissue.
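The dual-rate design described above (a 20 Hz visualization loop and a 1000 Hz force-feedback loop fed by the same on-the-fly processing core) can be sketched with two consumer threads sharing the latest processed frame. A minimal Python illustration under assumed names; the actual core is multi-hardware and not reproduced here:

```python
import threading
import time

class SharedScene:
    """Latest rtMRI-derived scene state, shared across consumer threads."""
    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def update(self, frame):
        with self._lock:
            self._frame = frame

    def latest(self):
        with self._lock:
            return self._frame

def run_loop(scene, rate_hz, consume, ticks):
    """Poll the shared scene at a fixed rate, handing each sample to a consumer."""
    period = 1.0 / rate_hz
    for _ in range(ticks):
        consume(scene.latest())
        time.sleep(period)

scene = SharedScene()
scene.update("frame-0")

counts = {"viz": 0, "haptics": 0}
# Visualization at 20 Hz and force feedback at 1000 Hz run independently,
# each reading whatever frame the processing core published most recently.
viz = threading.Thread(target=run_loop,
                       args=(scene, 20, lambda f: counts.__setitem__("viz", counts["viz"] + 1), 10))
hap = threading.Thread(target=run_loop,
                       args=(scene, 1000, lambda f: counts.__setitem__("haptics", counts["haptics"] + 1), 500))
viz.start(); hap.start()
viz.join(); hap.join()
```

Decoupling the consumers this way means a slow imaging frame never blocks the high-rate haptic loop; it simply keeps rendering the last published state.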
Background: Tele-mentoring facilitates the transfer of surgical knowledge. The objective of this work is to develop a tele-mentoring framework that enables a specialist surgeon to mentor an operating surgeon by transferring information in the form of the surgical instruments' motion required during a minimally invasive surgery. Method: A tele-mentoring framework is developed to transfer the video stream of the surgical field, the poses of the scope, and the port placement from the operating room to a remote location. From the remote location, the motion of virtual surgical instruments augmented onto the surgical field is sent back to the operating room. Results: The proposed framework is suitable for integration with laparoscopic as well as robotic surgeries. It takes on average 1.56 s to send information from the operating room to the remote location and 0.089 s in the opposite direction over a local area network. Conclusions: The work demonstrates a tele-mentoring framework that enables a specialist surgeon to mentor an operating surgeon during a minimally invasive surgery. KEYWORDS: augmented reality, minimally invasive surgeries, tele-mentoring, telemedicine

1 | INTRODUCTION
As surgery has evolved from open to minimally invasive, the framework of tele-mentoring technologies has largely remained the same. [1][2][3] It still involves a basic exchange of audio and annotated video messages, and lacks augmentation of information pertaining to surgical tool motion and tool-tissue interaction. 4,5 In an operating room setup for minimally invasive surgery (MIS), the surgeon operates on a patient using surgical instruments inserted through small incisions. These surgical instruments can either be manually operated (such as laparoscopic instruments) or robotically actuated. Along with the instruments, a scope (camera) is also inserted into the patient's body to visualise the interaction of the surgical instruments' tooltips with the tissue.
In the case of manual MIS, the surgeon
This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
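The exchange the framework above describes (scope pose and port placement sent from the operating room, virtual instrument motion sent back) amounts to framed message passing over a network. A minimal sketch using length-prefixed JSON over a loopback socket; the message fields and helper names are illustrative assumptions, not the framework's actual protocol:

```python
import json
import socket
import struct
import threading

def send_msg(sock, obj):
    """Length-prefixed JSON framing for pose/annotation messages."""
    payload = json.dumps(obj).encode()
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_msg(sock):
    """Read one length-prefixed JSON message."""
    (n,) = struct.unpack("!I", sock.recv(4))
    buf = b""
    while len(buf) < n:
        buf += sock.recv(n - len(buf))
    return json.loads(buf)

# Loopback demo: the OR sends a scope pose; the remote mentor replies
# with the motion of a virtual instrument to overlay on the surgical field.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
addr = srv.getsockname()

def remote_side():
    conn, _ = srv.accept()
    pose = recv_msg(conn)                       # scope pose from the OR
    send_msg(conn, {"tool_path": [[0, 0, 0], [1, 2, 3]], "ack": pose["seq"]})
    conn.close()

t = threading.Thread(target=remote_side)
t.start()
cli = socket.socket()
cli.connect(addr)
send_msg(cli, {"seq": 7, "scope_pose": [0.1, 0.2, 0.3, 0.0, 0.0, 0.0]})
reply = recv_msg(cli)
t.join(); cli.close(); srv.close()
```

Length prefixes keep message boundaries intact over TCP, which matters once video frames and pose updates share the same link.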
Abstract—Advances in computational methods and hardware platforms enable efficient processing of medical imaging data sets for surgical planning. In the case of neurosurgical interventions performed via a straight access path, planning entails selecting a pathway, from the scalp surface to the targeted area, that poses minimal risk to the patient. We propose a GPU-accelerated approach that enables quantitative estimation of the risk associated with a particular access path at interactive rates. It heavily exploits spatial acceleration data structures and efficient implementation of algorithms on GPUs. We evaluate the computational efficiency and scalability of the proposed approach through extensive performance comparisons, and show that interactive rates can be achieved even for high-resolution meshes. Through a user study and feedback obtained from domain experts, we identify some of the potential benefits that our high-speed approach offers for pre-operative planning and intra-operative replanning of straight-access neurosurgical interventions.
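As an illustration of quantitative path-risk estimation, the sketch below scores a straight access path by its nearest approach to critical structures. It is a brute-force CPU version with hypothetical names and a simple linear penalty; the approach described above instead relies on spatial acceleration structures and GPU implementation to reach interactive rates on high-resolution meshes:

```python
import numpy as np

def path_risk(entry, target, structures, margin=5.0, samples=50):
    """Score a straight access path by proximity to critical structures.

    structures: dict mapping a name to an (N, 3) array of surface points
    (e.g. vessels). Risk accumulates whenever the path passes within
    `margin` (mm) of a structure; zero means no structure is approached.
    """
    pts = np.linspace(entry, target, samples)       # sample along the path
    risk = 0.0
    for name, cloud in structures.items():
        # Pairwise distances between path samples and structure points.
        d = np.linalg.norm(pts[:, None, :] - cloud[None, :, :], axis=2)
        closest = d.min()                            # nearest approach
        if closest < margin:
            risk += (margin - closest) / margin      # linear penalty in [0, 1]
    return risk

vessel = np.array([[5.0, 0.0, 0.0]])                 # toy critical structure
safe = path_risk(np.array([0.0, 10, 0]), np.array([0.0, -10, 0]),
                 {"vessel": vessel})
risky = path_risk(np.array([10.0, 10, 0]), np.array([0.0, -10, 0]),
                  {"vessel": vessel})
```

The second path passes almost through the vessel and is penalized accordingly, while the first keeps outside the safety margin and scores zero.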
Background: User interfaces play a vital role in the planning and execution of an interventional procedure. The objective of this study is to investigate the effect of using different user interfaces for planning transrectal robot-assisted MR-guided prostate biopsy (MRgPBx) in an augmented reality (AR) environment. Method: End-user studies were conducted by simulating an MRgPBx system with end- and side-firing modes. The information from the system to the operator was rendered on a HoloLens as the output interface. A joystick, mouse/keyboard, and holographic menus were used as input interfaces to the system. Results: The studies indicated that using a joystick improved the interactive capacity and enabled the operator to plan MRgPBx in less time. It efficiently captures the operator's commands to manipulate the augmented environment representing the state of the MRgPBx system. Conclusions: The study demonstrates an alternative to conventional input interfaces for interacting with and manipulating an AR environment within the context of MRgPBx planning.
Real-time image-guided cardiac procedures (manual or robot-assisted) are emerging due to potential improvements in patient management and reductions in overall cost. These minimally invasive procedures require both real-time visualization and guidance for maneuvering an interventional tool safely inside the dynamic environment of the heart. In this work, we propose an approach to generate dynamic 4D access corridors from the apex to the aortic annulus for performing real-time MRI-guided transapical valvuloplasties. Ultrafast MR images (collected every 49.3 ms) are processed on-the-fly using projections to extract a conservative dynamic trace in the form of a three-dimensional access corridor. Our experimental results show that the reconstructed corridors can be refreshed with a delay of less than 0.5 ms to reflect the changes inside the left ventricle caused by breathing motion and the heartbeat.
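A conservative dynamic trace of the kind described above can be illustrated by intersecting lumen cross-sections over a rolling window, so the corridor contains only space that stayed inside the lumen throughout the recent frames. A simplified 2D NumPy sketch with hypothetical names; the actual method works on projections of ultrafast MR images rather than binary masks:

```python
from collections import deque

import numpy as np

class RollingCorridor:
    """Conservative cross-section of an access corridor.

    Keeps the intersection (logical AND) of the last `window` lumen masks,
    so the corridor only covers pixels that were inside the lumen in every
    recent frame -- a conservative trace under cardiac and breathing motion.
    """
    def __init__(self, window=8):
        self.masks = deque(maxlen=window)

    def update(self, lumen_mask):
        self.masks.append(lumen_mask.astype(bool))
        corridor = self.masks[0].copy()
        for m in list(self.masks)[1:]:
            corridor &= m
        return corridor

rc = RollingCorridor(window=3)
a = np.zeros((4, 4), bool); a[1:4, 1:4] = True   # lumen at frame 1
b = np.zeros((4, 4), bool); b[0:3, 0:3] = True   # lumen shifted at frame 2
rc.update(a)
corridor = rc.update(b)                          # shrinks to the overlap
```

Because each update only ANDs a handful of cached masks, the corridor can be refreshed in well under a millisecond, consistent with the sub-0.5 ms refresh reported above.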
Background Tele-mentoring during surgery facilitates the transfer of surgical knowledge from a mentor (specialist surgeon) to a mentee (operating surgeon). The aim of this work is to develop a tele-mentoring system tailored for minimally invasive surgery (MIS) in which the mentor can remotely demonstrate to the mentee the required motion of the surgical instruments. Methods A remote tele-mentoring system is implemented that generates visual cues in the form of virtual surgical instrument motion overlaid onto the live view of the operative field. The technical performance of the system is evaluated in a simulated environment in which the operating room and the mentor's central location were physically located in different countries and connected over the internet. In addition, a user study was performed to assess the system as a mentoring tool. Results On average, it took 260 ms to send a view of the operative field at 1920 × 1080 resolution from the operating room to the mentor's central location, and an average of 132 ms to receive the motion of the virtual surgical instruments from the central location to the operating room. The user study showed that it is feasible for the mentor to demonstrate, and for the mentee to understand and replicate, the motion of surgical instruments. Conclusion The work demonstrates the feasibility of transferring information over the internet from a mentor to a mentee in the form of virtual surgical instruments, whose motion is overlaid onto the live view of the operative field, enabling real-time interaction between the two surgeons.