A new technique for localizing both visible and occluded structures in an endoscopic view was proposed and tested. This method leverages preoperative data, as a source of patient-specific prior knowledge, together with vasculature pulsation and endoscopic visual cues to achieve accurate segmentation in the highly noisy and cluttered environment of endoscopic video. Our results on in vivo clinical cases of partial nephrectomy illustrate the potential of the proposed framework for augmented reality applications in minimally invasive surgeries.
Hilar dissection is an important and delicate stage in partial nephrectomy, during which surgeons remove the connective tissue surrounding the renal vasculature. Serious complications arise when occluded blood vessels, concealed by fat, are missed in the endoscopic view and, as a result, are not appropriately clamped. Such complications include catastrophic blood loss from internal bleeding and the associated occlusion of the surgical view during excision of the cancerous mass (due to heavy bleeding), both of which may compromise the visibility of surgical margins or even force conversion from a minimally invasive to an open intervention. To aid in vessel discovery, we propose a novel automatic method for segmenting occluded vasculature by labeling minute pulsatile motion that is otherwise imperceptible to the naked eye. Our segmentation technique extracts subtle tissue motion using an approach adapted from phase-based video magnification, in which we measure motion from periodic changes in local phase information, albeit for labeling rather than magnification. Measuring local phase through spatial decomposition of each endoscopic video frame with complex wavelet pairs, our approach assigns segmentation labels by detecting regions whose temporal local phase changes match the heart rate.

* Corresponding author. Email address: alborza@ece.ubc.ca (Alborz Amir-Khalili)
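The labeling pipeline described above can be sketched in a few lines. This is only a minimal, single-scale illustration of the idea, using one complex Gabor quadrature pair in place of the full multi-scale complex wavelet decomposition; all function names and parameter values here are hypothetical, not the paper's implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size=9, wavelength=4.0, sigma=2.0):
    """Complex Gabor filter: even (cosine) and odd (sine) parts form a quadrature pair."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    envelope = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return envelope * np.exp(1j * 2.0 * np.pi * xx / wavelength)

def local_phase(frame, kernel):
    """Local spatial phase of one frame from the complex filter response."""
    return np.angle(fftconvolve(frame, kernel, mode="same"))

def pulsatile_mask(video, fps, heart_hz, band=0.3, thresh=0.5):
    """Label pixels whose temporal local-phase variation concentrates near the heart rate.

    video: array of shape (T, H, W); returns a boolean (H, W) segmentation mask.
    """
    kernel = gabor_kernel()
    phases = np.stack([local_phase(f, kernel) for f in video])   # (T, H, W)
    phases = np.unwrap(phases, axis=0)                           # remove 2*pi jumps in time
    phases -= phases.mean(axis=0, keepdims=True)                 # remove static (DC) phase
    spec = np.abs(np.fft.rfft(phases, axis=0))                   # temporal spectrum per pixel
    freqs = np.fft.rfftfreq(video.shape[0], d=1.0 / fps)
    in_band = (freqs >= heart_hz - band) & (freqs <= heart_hz + band)
    # Fraction of (non-DC) spectral energy inside the cardiac band.
    ratio = spec[in_band].sum(axis=0) / (spec[1:].sum(axis=0) + 1e-9)
    return ratio > thresh
```

A practical system would pool such evidence over multiple scales and orientations and weight it by filter-response amplitude, but the core decision, "does this pixel's phase oscillate at the cardiac frequency?", is as above.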
Abstract. In most robot-assisted surgical interventions, multimodal fusion of pre- and intra-operative data is highly valuable, affording the surgeon a more comprehensive understanding of the surgical scene observed through the stereo endoscopic camera. More specifically, in the case of partial nephrectomy, fusing pre-operative segmentations of the kidney and tumor with the stereo endoscopic view can guide tumor localization and the identification of resection margins. However, surgeons are often unable to reliably assess how much trust they can place in what is overlaid on the screen. In this paper, we present a proof-of-concept uncertainty-encoded augmented reality framework with novel visualizations that project the uncertainties derived from the pre-operative CT segmentation onto the surgeon's stereo endoscopic view. To verify its clinical potential, the proposed method is applied to an ex vivo lamb kidney. The results are contrasted with different visualization solutions based on crisp segmentation, demonstrating that our method provides valuable additional information that can help the surgeon during resection planning.
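One simple way to encode such segmentation uncertainty in an overlay, sketched here purely as an illustrative assumption and not as the paper's actual rendering method, is to modulate overlay opacity by the per-pixel label probability, so that confident regions appear opaque and uncertain boundaries fade out:

```python
import numpy as np

def uncertainty_overlay(frame_rgb, prob_map, color=(0, 255, 0)):
    """Alpha-blend a segmentation color onto the frame, with opacity
    proportional to the per-pixel probability of the label.

    frame_rgb: (H, W, 3) uint8 image; prob_map: (H, W) values in [0, 1].
    """
    alpha = np.clip(prob_map, 0.0, 1.0)[..., None]          # (H, W, 1)
    overlay = np.asarray(color, dtype=float)                 # broadcasts to (H, W, 3)
    out = (1.0 - alpha) * frame_rgb.astype(float) + alpha * overlay
    return out.astype(np.uint8)
```

Fully certain pixels render in the pure overlay color, zero-probability pixels keep the original video, and intermediate probabilities blend smoothly between the two.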
Abstract. Tumour identification is a critical step in robot-assisted partial nephrectomy (RAPN), during which the surgeon determines the tumour localization and resection margins. To help the surgeon achieve this step, our research aims at leveraging both pre- and intra-operative imaging modalities (CT, MRI, laparoscopic US, stereo endoscopic video) to provide an augmented reality view of kidney-tumour boundaries with uncertainty-encoded information. We present herein the progress of this research, including segmentation of pre-operative scans, biomechanical simulation of deformations, stereo surface reconstruction from the stereo endoscopic camera, pre-operative to intra-operative data registration, and augmented reality visualization.
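To make one component of the pipeline above concrete: stereo surface reconstruction ultimately converts per-pixel disparity into metric depth via the standard pinhole relation Z = f * B / d (focal length in pixels, baseline in metres, disparity in pixels). A minimal sketch, assuming rectified cameras; the function name and parameters are hypothetical:

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, eps=1e-6):
    """Pinhole stereo triangulation: depth Z = f * B / d per pixel.

    Pixels with (near-)zero disparity correspond to points at infinity
    or failed matches and are assigned infinite depth.
    """
    d = np.asarray(disparity, dtype=float)
    depth = np.full_like(d, np.inf)
    valid = d > eps
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth
```

In practice the disparity map itself would come from a stereo-matching step on the rectified endoscopic pair, and the resulting depth map gives the intra-operative surface that the pre-operative model is registered to.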