A software system to provide intuitive navigation for MRI-guided robotic transperineal prostate therapy is presented. In the system, the robot control unit, the MRI scanner, and the open-source navigation software are connected via Ethernet to exchange commands, coordinates, and images using an open network communication protocol, OpenIGTLink. The system has six states called "workphases" that provide the necessary synchronization of all components during each stage of the clinical workflow, and the user interface guides the operator linearly through these workphases. On top of this framework, the software provides the following features for needle guidance: interactive target planning; 3D image visualization with the current needle position; and treatment monitoring through real-time MR images of needle trajectories in the prostate. These features are supported by calibration of robot and image coordinates through fiducial-based registration. Performance tests show that the registration error of the system was 2.6 mm within the prostate volume, and registered real-time 2D images were displayed 1.97 s after the image location was specified.
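As a rough illustration of how such workphase synchronization could be structured, the sketch below models a linear workflow as a small state machine in Python. The workphase names, the transition table, and the WorkphaseController class are illustrative assumptions, not the actual system's API; in the real system a phase change would additionally be broadcast to the robot control unit and the scanner over OpenIGTLink.

```python
from enum import Enum, auto

class Workphase(Enum):
    # Illustrative names; the actual system defines six clinical workphases
    # that all connected components step through together.
    STARTUP = auto()
    PLANNING = auto()
    CALIBRATION = auto()
    TARGETING = auto()
    MANUAL = auto()
    EMERGENCY = auto()

# Allowed transitions of a linear workflow (assumed for illustration);
# EMERGENCY is reachable from any phase.
ALLOWED = {
    Workphase.STARTUP: {Workphase.PLANNING},
    Workphase.PLANNING: {Workphase.CALIBRATION},
    Workphase.CALIBRATION: {Workphase.TARGETING},
    Workphase.TARGETING: {Workphase.MANUAL},
    Workphase.MANUAL: {Workphase.TARGETING},
}

class WorkphaseController:
    """Keeps navigation software, robot, and scanner in the same workphase."""

    def __init__(self) -> None:
        self.phase = Workphase.STARTUP

    def request(self, new_phase: Workphase) -> bool:
        """Accept a phase change only if it follows the clinical workflow."""
        if new_phase is Workphase.EMERGENCY or new_phase in ALLOWED.get(self.phase, set()):
            self.phase = new_phase
            # A real system would broadcast the phase change to the robot
            # control unit and the scanner over OpenIGTLink at this point.
            return True
        return False
```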
Minimally invasive laparoscopic surgery is widely used for the treatment of cancer and other diseases. During the procedure, gas insufflation is used to create space for laparoscopic tools and operation. Insufflation causes the organs and abdominal wall to deform significantly. Because of this large deformation, the usefulness of surgical plans, which are typically based on pre-operative images, is limited for real-time navigation. In some recent work, intra-operative images, such as cone-beam CT or interventional CT, have been introduced to provide updated volumetric information after insufflation. Other work in this area has focused on simulating gas insufflation using only the pre-operative images to estimate the deformation. This paper proposes a novel registration method for pre- and intra-operative 3D image fusion for laparoscopic surgery. In this approach, the deformation of the pre-operative images is driven by a biomechanical model of the insufflation process. The proposed method was validated on five synthetic data sets generated from clinical images and three pairs of in vivo CT scans acquired from two pigs, before and after insufflation. The results show that the proposed method achieved high accuracy for both the synthetic and real insufflation data.
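As a loose sketch of the resampling step that such model-driven registration ultimately requires, the snippet below warps a pre-operative CT with a dense displacement field assumed to have been produced by a biomechanical insufflation model. The file names and the use of SimpleITK are assumptions for illustration, not the paper's implementation.

```python
import SimpleITK as sitk

# Hypothetical inputs: a pre-operative CT and a dense displacement field
# (in physical coordinates) predicted by a biomechanical insufflation model.
preop = sitk.ReadImage("preop_ct.nii.gz", sitk.sitkFloat32)
displacement = sitk.ReadImage("insufflation_displacement.nii.gz",
                              sitk.sitkVectorFloat64)

# Wrap the displacement field as a transform and warp the pre-operative
# image into the intra-operative (insufflated) frame.
transform = sitk.DisplacementFieldTransform(displacement)
warped = sitk.Resample(preop, preop, transform,
                       sitk.sitkLinear, -1000.0)  # air in HU as background
sitk.WriteImage(warped, "preop_warped_to_intraop.nii.gz")
```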
Magnetic Resonance Imaging (MRI) has the potential to be a superior medical imaging modality for guiding and monitoring prostatic interventions. However, the strong magnetic field prevents the use of conventional mechatronics, and the confined physical space makes it extremely challenging to access the patient. We have designed a robotic assistant system that overcomes these difficulties and promises safe and reliable intra-prostatic needle placement inside closed high-field MRI scanners. The robot performs needle insertion under real-time 3T MR image guidance; workspace requirements, MR compatibility, and workflow have been evaluated on phantoms. This paper explains the robot mechanism and controller design and presents the results of a preliminary evaluation of the system.
Surgical robots are an important component for delivering advanced, paradigm-shifting technology such as image-guided surgery and navigation. However, for robotic systems to be readily adopted into the operating room, they must be easy and convenient to control and must facilitate a smooth surgical workflow. In minimally invasive surgery, the laparoscope may be held by a robot, but controlling and moving the laparoscope remains challenging. It is disruptive to the workflow for the surgeon to put down the tools to move the robot, particularly in solo surgery approaches. This paper proposes a novel approach for naturally controlling the position of a robot-mounted laparoscope by detecting a surgical grasping tool and recognizing whether its state is open or closed. The approach does not require markers or fiducials and uses a machine learning framework for tool and state recognition that exploits naturally occurring visual cues. Furthermore, a virtual user interface on the laparoscopic image is proposed that uses the surgical tool as a pointing device to overcome common problems in depth perception. Instrument detection and state recognition are evaluated on in-vivo and ex-vivo porcine datasets. To demonstrate the practical surgical application and real-time performance, the system is validated in a simulated surgical environment.
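The paper's learning framework is not reproduced here, but as a simplified stand-in for the open/closed state-recognition step, the sketch below classifies cropped grayscale tool patches with HOG features and a linear SVM. The feature choice, patch size, and label encoding are assumptions for illustration.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def patch_features(patch: np.ndarray) -> np.ndarray:
    """HOG descriptor of a grayscale patch cropped around the detected tool."""
    patch = resize(patch, (64, 64), anti_aliasing=True)
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

def train_state_classifier(patches, labels):
    """labels: 1 = grasper open, 0 = grasper closed (illustrative encoding)."""
    X = np.stack([patch_features(p) for p in patches])
    y = np.asarray(labels)
    clf = LinearSVC(C=1.0)
    print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    return clf.fit(X, y)
```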
Magnetically guided capsule endoscopy (MGCE) was introduced in 2010 as a procedure in which a capsule in the stomach is navigated via an external magnetic field. The quality of the examination depends on the operator's ability to detect aspects of interest in real time. We present a novel two-step computer-assisted diagnostic procedure (CADP) algorithm for indicating gastritis and gastrointestinal bleeding in the stomach during the examination. First, we identify and exclude subregions containing bubbles, which can interfere with further processing. Then we address the challenge of lesion localization in an environment with changing contrast and lighting conditions. After contrast-normalized filtering, feature extraction is performed. The proposed algorithm was tested on 300 images from different patients with uniformly distributed occurrences of the target pathologies. We correctly segmented 84.72% of bubble areas. A mean detection rate of 86% for the target pathologies was achieved in a 5-fold leave-one-out cross-validation.
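As an illustrative sketch of the second stage's idea of contrast-normalized feature extraction, the snippet below normalizes lightness with CLAHE and computes simple color statistics over the non-bubble region of a frame. CLAHE and the specific statistics are generic stand-ins, not the filtering or features used in the paper.

```python
import numpy as np
from skimage import color, exposure

def lesion_features(rgb_frame: np.ndarray, bubble_mask: np.ndarray) -> np.ndarray:
    """Simple color/contrast statistics over the non-bubble region of one frame.

    bubble_mask: boolean array, True where bubbles were segmented in stage one.
    """
    lab = color.rgb2lab(rgb_frame)
    # Normalize lightness (CLAHE) to reduce the effect of changing illumination.
    l_norm = exposure.equalize_adapthist(lab[..., 0] / 100.0)
    valid = ~bubble_mask
    a, b = lab[..., 1][valid], lab[..., 2][valid]
    return np.array([l_norm[valid].mean(), l_norm[valid].std(),
                     a.mean(), a.std(), b.mean(), b.std()])
```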
2D/3D image registration to align a 3D volume and 2D X-ray images is a challenging problem due to its ill-posed nature and the various artifacts present in 2D X-ray images. In this paper, we propose a multi-agent system with an auto attention mechanism for robust and efficient 2D/3D image registration. Specifically, an individual agent is trained with a dilated Fully Convolutional Network (FCN) to perform registration in a Markov Decision Process (MDP) by observing a local region, and the final action is then taken based on the proposals from multiple agents, weighted by their corresponding confidence levels. The contributions of this paper are threefold. First, we formulate 2D/3D registration as an MDP with observations, actions, and rewards properly defined with respect to X-ray imaging systems. Second, to handle the various artifacts in 2D X-ray images, multiple local agents are employed efficiently via FCN-based structures, and an auto attention mechanism is proposed to favor proposals from regions with more reliable visual cues. Third, a dilated FCN-based training mechanism is proposed to significantly reduce the degrees of freedom in the simulation of the registration environment and to improve training efficiency by an order of magnitude compared to a standard CNN-based training method. We demonstrate that the proposed method achieves high robustness both on spine cone-beam Computed Tomography data with a low signal-to-noise ratio and on data from minimally invasive spine surgery, where severe image artifacts and occlusions are present due to metal screws and guide wires, outperforming other state-of-the-art methods (single-agent-based and optimization-based) by a large margin.
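As a minimal sketch of the confidence-weighted aggregation idea, the snippet below combines incremental rigid-motion proposals from several local agents into one action using softmax weights derived from their attention scores; the agent networks themselves are omitted, and the shapes and weighting scheme are assumptions for illustration.

```python
import numpy as np

def aggregate_actions(proposals: np.ndarray, confidences: np.ndarray) -> np.ndarray:
    """proposals: (n_agents, 6) rigid-motion increments (3 rotations, 3 translations).
    confidences: (n_agents,) attention scores; higher means a more reliable region."""
    w = np.exp(confidences - confidences.max())  # softmax over agents
    w /= w.sum()
    return (w[:, None] * proposals).sum(axis=0)  # confidence-weighted action
```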
Background: Diagnosis of intestinal metaplasia and dysplasia via conventional endoscopy is characterized by low interobserver agreement and poor correlation with histopathologic findings. Chromoendoscopy significantly enhances the visibility of mucosal irregularities, such as metaplastic and dysplastic mucosa. Magnetically guided capsule endoscopy (MGCE) offers an alternative technology for upper GI examination. We expect the difficulties of diagnosing neoplasia in conventional endoscopy to carry over to MGCE. We therefore aim to chart a path for the application of chromoendoscopy in MGCE via an ex-vivo animal study.
Methods: We propose a modified preparation protocol that adds a staining step to the existing MGCE preparation protocol. An optimal staining concentration is determined quantitatively for different stain types and pathologies. To that end, 190 pig stomach tissue samples, with and without imitated lesions, were stained with different dye concentrations. Quantitative visual criteria are introduced to measure the quality of the staining with respect to mucosa and lesion visibility. The optimal concentrations thus determined were then tested in an ex-vivo pig stomach experiment under magnetic guidance of an endoscopic capsule with the modified protocol.
Results: We found that the proposed protocol modification does not impair visibility in the stomach or steerability of the endoscopic capsule. An average optimal staining concentration for the proposed protocol was found at 0.4% for both Methylene blue and Indigo carmine. Lesion visibility is improved when using the previously obtained optimal dye concentration.
Conclusions: We conclude that chromoendoscopy may be applied in MGCE and improves mucosa and lesion visibility. Systematic evaluation provides important information on appropriate staining concentrations. However, further animal and human in-vivo studies are necessary.