Subretinal injection is a delicate and complex microsurgery that requires surgeons to inject the therapeutic substance into a pre-operatively defined and intra-operatively updated subretinal target area. Because of the lack of subretinal visual feedback, it is hard to sense the insertion depth during the procedure, which affects surgical outcomes and hinders the widespread use of this treatment. This paper presents a novel approach to estimating the 3D position of the needle under the retina using information from microscope-integrated intraoperative Optical Coherence Tomography (iOCT). We evaluated our approach on both a tissue phantom and ex-vivo porcine eyes. Evaluation results show an average distance-measurement error of 4.7 μm (maximum of 16.5 μm). We furthermore verified the feasibility of the proposed method for tracking the insertion depth of the needle in robot-assisted subretinal injection.
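As a rough illustration of how such an iOCT-based depth estimate can be expressed, a minimal sketch follows, assuming a known axial pixel pitch and axial (row) pixel indices for the needle tip and the retinal surface; the resolution value and function names are illustrative assumptions, not parameters from the paper:

```python
# Sketch: convert an axial pixel offset in an OCT B-scan into a metric
# insertion depth. The axial resolution is a hypothetical example value,
# not one reported in the abstract above.

AXIAL_RES_UM = 3.9  # assumed axial pixel pitch in micrometers (example)

def insertion_depth_um(tip_row: int, surface_row: int) -> float:
    """Depth of the needle tip below the retinal surface, in micrometers.

    tip_row / surface_row are axial (row) pixel indices in the B-scan,
    with row index increasing with depth.
    """
    return (tip_row - surface_row) * AXIAL_RES_UM

print(insertion_depth_um(412, 380))  # 32 rows below the surface
```

The measured tip-below-surface distance then updates in real time as new iOCT frames arrive, which is the quantity whose error the evaluation above reports.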
Sonic interaction as a technique for conveying information has advantages over conventional visual augmented reality methods, especially when augmenting the visual field with extra information causes distraction. Sonification of knowledge extracted by applying computational methods to sensory data is a well-established concept. However, some aspects of sonic interaction design, such as aesthetics, the cognitive effort required to perceive information, and avoiding alarm fatigue, are not well studied in the literature. In this work, we present a sonification scheme based on physical-modeling sound synthesis, targeting focus-demanding tasks that require extreme precision. The proposed mapping techniques are designed to require minimal training for users to adapt to and minimal mental effort to interpret the conveyed information. Two experiments were conducted to assess the feasibility of the proposed method and compare it against visual augmented reality in high-precision tasks. The observed quantitative results suggest that sound patches generated by physical modeling achieve the desired goal of improving the user experience and overall task performance with minimal training.
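To make the mapping idea concrete, here is a toy sketch of how a continuously measured quantity (e.g. a tool-to-target distance) could drive the parameters of a physically modeled sound; the parameter choices, ranges, and function names are illustrative assumptions, not the mapping actually used in the work described above:

```python
# Toy sketch of a distance-driven sonification mapping. The ranges and
# the choice of driven parameters (strike rate, damping) are assumptions
# for illustration only.

def map_distance_to_sound(distance_mm: float, d_max_mm: float = 10.0):
    """Map a tool-to-target distance onto two synthesis parameters:
    - strike_rate_hz: how often the virtual object is excited
      (faster as the tool approaches the target),
    - damping: how quickly each strike decays
      (drier far away, more resonant near the target).
    """
    # Normalize and clamp distance to [0, 1].
    x = min(max(distance_mm / d_max_mm, 0.0), 1.0)
    strike_rate_hz = 1.0 + 9.0 * (1.0 - x)  # 1 Hz far away, 10 Hz at target
    damping = 0.2 + 0.6 * x
    return strike_rate_hz, damping

rate, damp = map_distance_to_sound(0.0)
print(rate, damp)  # 10.0 0.2 at the target
```

Because the sound changes smoothly and stays within a familiar physical metaphor (an object being struck), such a mapping aims at the low-training, low-fatigue goals stated above, in contrast to discrete alarm tones.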
Intraoperative optical coherence tomography is still not widely used in routine ophthalmic surgery, despite evident clinical benefits, because today's spectral-domain optical coherence tomography systems lack flexibility, acquisition speed, and imaging depth. We present, to the best of our knowledge, the most flexible swept-source optical coherence tomography (SS-OCT) engine coupled to an ophthalmic surgical microscope, operating at MHz A-scan rates. We use a MEMS-tunable VCSEL to implement application-specific imaging modes, enabling diagnostic and documentary capture scans, live B-scan visualizations, and real-time 4D-OCT renderings. The technical design and implementation of the SS-OCT engine, as well as the reconstruction and rendering platform, are presented. All imaging modes are evaluated in surgical mock maneuvers using ex vivo bovine and porcine eye models. The applicability and limitations of MHz SS-OCT as a visualization tool for ophthalmic surgery are discussed.
Needle segmentation is a fundamental step for needle reconstruction and image-guided surgery. Although there have been success stories in needle segmentation for non-microsurgical procedures, those methods cannot be directly extended to ophthalmic surgery because of challenges tied to the required spatial resolution. Since ophthalmic surgery is performed with finer, smaller surgical instruments within micro-structural anatomy, particularly in the retina, both delicate manipulation and precise perception become difficult. To address these challenges, in this paper we investigate needle segmentation in ophthalmic operations on 60 Optical Coherence Tomography (OCT) cubes captured during needle-injection surgeries on ex-vivo pig eyes. We developed two different approaches, a conventional method based on morphological features (MF) and a specifically designed fully convolutional network (FCN) method, and evaluated them on this benchmark for needle segmentation in volumetric OCT images. The experimental results show that the FCN method achieves better segmentation performance on four evaluation metrics, while the MF method has a shorter inference time, which provides a valuable reference for future work.
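The abstract does not name its four evaluation metrics; the Dice overlap coefficient is one metric commonly used for volumetric segmentation, and a minimal sketch of it (illustrative, not necessarily one of the paper's metrics) is:

```python
# Sketch: Dice overlap between a predicted and a ground-truth binary
# mask, a common segmentation metric. Shown for illustration; the paper
# does not specify which four metrics it uses.
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient between two binary masks (1.0 = perfect match)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom

pred = np.zeros((4, 4), dtype=bool)
gt = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:3] = True  # 4 predicted voxels
gt[1:3, 1:4] = True    # 6 ground-truth voxels, 4 of them overlapping
print(dice_score(pred, gt))  # 2*4 / (4+6) = 0.8
```

The same computation applies unchanged to 3D OCT cubes, since the masks are compared voxel-wise regardless of dimensionality.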
Detection of the instrument tip in retinal microsurgery videos is extremely challenging due to rapid motion, illumination changes, cluttered background, and the deformable shape of the instrument. For the same reasons, frequent tracking failures add the overhead of reinitializing the tracker. In this work, a new method is proposed to localize not only the instrument center point but also its tips and orientation, without the need for manual reinitialization. Our approach models the instrument as a Conditional Random Field (CRF) in which each part of the instrument is detected separately. The relations between these parts are modeled to capture the translation, rotation, and scale changes of the instrument. Tracking is performed via separate detection of the instrument parts and evaluation of confidence via the modeled dependence functions. In case of low-confidence feedback, an automatic recovery process is performed. The algorithm is evaluated on in-vivo ophthalmic surgery datasets, and its performance is comparable to state-of-the-art methods with the advantage that no manual reinitialization is needed.
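The confidence-gated loop described above can be sketched as follows; the part names, the way per-part and pairwise scores are combined, and the recovery threshold are illustrative assumptions, not the paper's actual CRF model:

```python
# Sketch of a confidence-gated tracking step: per-part detection scores
# and pairwise consistency terms are fused, and low joint confidence
# triggers automatic recovery. All names and values are assumptions.

RECOVERY_THRESHOLD = 0.4  # assumed confidence threshold

def joint_confidence(part_scores: dict, pairwise: dict) -> float:
    """Fuse per-part detection scores with pairwise geometric
    consistency terms (e.g. plausibility of tip-to-center distance)."""
    unary = sum(part_scores.values()) / len(part_scores)
    pair = sum(pairwise.values()) / len(pairwise)
    return 0.5 * unary + 0.5 * pair  # assumed equal weighting

def track_step(part_scores: dict, pairwise: dict) -> str:
    conf = joint_confidence(part_scores, pairwise)
    if conf < RECOVERY_THRESHOLD:
        return "recover"  # re-detect parts over the full frame
    return "track"        # keep local search windows around the parts

print(track_step({"tip_left": 0.9, "tip_right": 0.8, "center": 0.85},
                 {("tip_left", "center"): 0.9,
                  ("tip_right", "center"): 0.7}))  # prints "track"
```

The key design point mirrored here is that recovery is triggered by the model's own confidence feedback rather than by a human operator, which is what removes the manual-reinitialization overhead.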