Purpose: In robotic-assisted partial nephrectomy (RAPN), intraoperative ultrasound (IOUS) helps localise and outline tumours and blood vessels within the kidney. This work evaluates the use of the pneumatically attachable flexible (PAF) rail system for 3D US reconstruction of malignant masses in RAPN. The PAF rail system is a novel device, developed and previously presented by the authors, that enables track-guided US scanning.

Methods: We present a comparison study of 3D US reconstruction of masses based on the da Vinci Surgical System kinematics and on single- and stereo-camera tracking of visual markers embedded on the probe. A US-realistic kidney phantom embedding a mass is used for testing. A new design for the US probe attachment, intended to enhance the performance of the kinematic approach, is presented, and a feature extraction algorithm is proposed to detect the margins of the targeted mass in US images.

Results: To evaluate the performance of the investigated approaches, the resulting 3D reconstructions were compared to a CT scan of the phantom. The data collected indicate that single-camera reconstruction outperformed the other approaches, reconstructing the targeted mass with sub-millimetre accuracy.

Conclusions: This work demonstrates that the PAF rail system provides a reliable platform for accurate 3D US reconstruction of masses in RAPN procedures. The proposed system also has the potential to be employed in other surgical procedures such as hepatectomy or laparoscopic liver resection.
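The core of any tracked-US 3D reconstruction, whether the tracking comes from robot kinematics or camera-based marker detection, is mapping each frame's pixels into a common world frame via the probe pose. The following is a minimal sketch of that step, assuming an isotropic pixel spacing and a 4x4 homogeneous probe pose per frame; the function name, spacing value, and image-plane convention are illustrative assumptions, not details from the paper.

```python
import numpy as np

def us_pixels_to_world(pixels_uv, pose, spacing_mm=0.1):
    """Map (u, v) pixel coordinates of one US frame into 3D world coordinates.

    pixels_uv : (N, 2) array of pixel indices in the image plane
    pose      : (4, 4) homogeneous probe pose (from kinematics or a tracker)
    spacing_mm: assumed isotropic pixel size in millimetres
    """
    pixels_uv = np.asarray(pixels_uv, dtype=float)
    n = len(pixels_uv)
    # The image plane is taken as the probe's local x-y plane (z = 0).
    local = np.column_stack([pixels_uv * spacing_mm,
                             np.zeros(n), np.ones(n)])  # (N, 4) homogeneous
    world = (pose @ local.T).T
    return world[:, :3]

# Example: a probe translated 10 mm along z, with no rotation.
pose = np.eye(4)
pose[2, 3] = 10.0
pts = us_pixels_to_world([[0, 0], [100, 50]], pose)
```

Stacking the transformed pixels of all frames produced along the PAF rail yields the point cloud from which the mass surface is reconstructed; the accuracy of the result is bounded by the accuracy of `pose`, which is exactly what the kinematic and camera-based approaches differ on.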
Purpose: Intra-retinal delivery of novel sight-restoring therapies will require the precision of robotic systems accompanied by excellent visualisation of retinal layers. Intra-operative Optical Coherence Tomography (iOCT) provides cross-sectional retinal images in real time, but at the cost of image quality that is insufficient for intra-retinal therapy delivery. This paper proposes a super-resolution methodology that improves iOCT image quality by leveraging the spatiotemporal consistency of incoming iOCT video streams.

Methods: To overcome the absence of ground-truth high-resolution (HR) images, we first generate HR iOCT images by fusing spatially aligned iOCT video frames. Then, we automatically assess the quality of the HR images on key retinal layers using a deep semantic segmentation model. Finally, we use image-to-image translation models (Pix2Pix and CycleGAN) to enhance the quality of low-resolution (LR) images via quality transfer from the estimated HR domain.

Results: The proposed methodology generates iOCT images of improved quality according to both full-reference and no-reference metrics. A qualitative study with expert clinicians also confirms the improvement in the delineation of pertinent layers and the reduction of artefacts. Furthermore, our approach outperforms conventional denoising filters and the learning-based state of the art.

Conclusions: The results indicate that learning-based methods using the HR domain estimated through our pipeline can enhance iOCT image quality. The proposed method can therefore computationally augment the capabilities of iOCT imaging, helping this modality support the vitreoretinal surgical interventions of the future.
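The first step of the pipeline, fusing spatially aligned video frames into an estimated HR image, rests on the fact that averaging independent speckle realisations of the same anatomy suppresses noise. A minimal sketch of that idea on synthetic data follows; the median fusion rule and the noise model are illustrative assumptions, and the spatial alignment itself is assumed to have been done upstream.

```python
import numpy as np

def fuse_aligned_frames(frames):
    """Fuse a stack of aligned frames (T, H, W) into one image via the
    per-pixel median, which is robust to speckle outliers."""
    stack = np.asarray(frames, dtype=float)
    return np.median(stack, axis=0)

rng = np.random.default_rng(0)
# Synthetic stand-in for a retinal B-scan: a smooth intensity gradient.
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
# Fifteen aligned "video frames" of the same scene with additive noise.
frames = [clean + rng.normal(0.0, 0.2, clean.shape) for _ in range(15)]
fused = fuse_aligned_frames(frames)
```

In the real pipeline the fused images are not used directly as output, since alignment is imperfect and motion corrupts some fusions; they instead define the HR target domain, gated by the segmentation-based quality check, that the translation models learn to map into.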
Effective treatment of degenerative retinal diseases will require robot-assisted intraretinal therapy delivery supported by excellent retinal layer visualisation. Intra-operative Optical Coherence Tomography (iOCT) is an imaging modality that provides real-time, cross-sectional retinal images, partially allowing visualisation of the layers where sight-restoring treatments should be delivered. Unfortunately, iOCT systems sacrifice image quality for high frame rates, making the identification of pertinent layers challenging. This paper proposes a super-resolution pipeline that enhances the quality of iOCT images by leveraging information from iOCT 3D cube scans. We first explore whether 3D iOCT cube scans can indeed serve as high-resolution images by performing image quality assessment. Then, we apply non-rigid image registration to generate partially aligned pairs, and we carry out data augmentation to increase the available training data. Finally, we use CycleGAN to transfer quality between the low-resolution (LR) and high-resolution (HR) domains. Quantitative analysis demonstrates that iOCT quality increases with statistical significance, but a qualitative study with expert clinicians is inconclusive with regard to their preferences.
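The first step above, checking whether cube scans actually constitute a higher-quality domain, amounts to scoring both image sets with a no-reference quality metric. As a toy stand-in for the metrics used in such an assessment, the sketch below estimates noise as the residual left after a 3x3 box blur (lower is cleaner); the metric choice and the synthetic images are assumptions for illustration only.

```python
import numpy as np

def noise_score(img):
    """Crude no-reference noise proxy: standard deviation of the high-pass
    residual after a 3x3 box blur (lower means a cleaner image)."""
    img = np.asarray(img, dtype=float)
    k = np.ones((3, 3)) / 9.0
    # Naive 'valid' 3x3 convolution built from shifted slices.
    blur = sum(img[i:img.shape[0] - 2 + i, j:img.shape[1] - 2 + j] * k[i, j]
               for i in range(3) for j in range(3))
    resid = img[1:-1, 1:-1] - blur
    return resid.std()

# Synthetic comparison: a smooth "cube-scan slice" vs. a noisy "iOCT B-scan".
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noisy = clean + np.random.default_rng(0).normal(0.0, 0.2, clean.shape)
```

If the cube-scan set scores consistently better under such metrics, it is justified as the HR training domain for the CycleGAN quality transfer.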
Regenerative therapies have recently shown potential in restoring sight lost to degenerative diseases. Their efficacy requires precise intra-retinal delivery, which can be achieved by robotic systems accompanied by high-quality visualisation of retinal layers. Intra-operative Optical Coherence Tomography (iOCT) captures cross-sectional retinal images in real time, but with image quality that is inadequate for intra-retinal therapy delivery. This paper proposes a two-stage super-resolution methodology that enhances the quality of low-resolution (LR) iOCT images by leveraging information from pre-operatively acquired high-resolution (HR) OCT (preOCT) images. First, we learn the degradation process from the HR to the LR domain through CycleGAN and use it to generate pseudo-iOCT (LR) images from the HR preOCT ones. Then, we train a Pix2Pix model on the pairs of pseudo-iOCT and preOCT images to learn the super-resolution mapping. Quantitative analysis using both full-reference and no-reference image quality metrics demonstrates that our approach clearly outperforms learning-based state-of-the-art techniques with statistical significance. Achieving iOCT image quality comparable to preOCT quality can help establish this medical imaging modality in vitreoretinal surgery, without requiring expensive hardware-related system updates.
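The two-stage scheme works because stage one manufactures perfectly paired training data for stage two: a learned HR-to-LR degradation turns every unpaired HR image into a pseudo-LR counterpart, and a supervised model can then be fit on the pairs. The toy sketch below illustrates that data flow, with the CycleGAN degradation replaced by a hand-coded corruption and the Pix2Pix restorer replaced by a least-squares affine fit; both stand-ins are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def degrade(hr):
    """Stage-1 stand-in for the learned HR->LR degradation:
    contrast loss, brightness shift, and additive noise."""
    return 0.5 * hr + 0.1 + rng.normal(0.0, 0.01, hr.shape)

# Unpaired HR images become perfectly paired (pseudo-LR, HR) training data.
hr_images = [rng.random((32, 32)) for _ in range(5)]
pairs = [(degrade(hr), hr) for hr in hr_images]

# Stage-2 stand-in: fit a pixelwise affine restorer lr -> a*lr + b on the pairs.
lr_flat = np.concatenate([lr.ravel() for lr, _ in pairs])
hr_flat = np.concatenate([hr.ravel() for _, hr in pairs])
A = np.column_stack([lr_flat, np.ones_like(lr_flat)])
(a, b), *_ = np.linalg.lstsq(A, hr_flat, rcond=None)

# Apply the learned restorer to one pseudo-LR image.
restored = a * pairs[0][0] + b
```

The fitted restorer approximately inverts the degradation (here, `a` near 2 and `b` near -0.2), which is the same mechanism that lets Pix2Pix, trained only on pseudo pairs, generalise to real iOCT frames insofar as the learned degradation matches the true one.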