Background: Electromagnetically Navigated Bronchoscopy (ENB) is the current state of the art in diagnostic and interventional bronchoscopy. CT-to-body divergence is a critical hurdle in ENB, causing navigation error and ultimately limiting the clinical efficacy of diagnosis and treatment. In this study, Visually Navigated Bronchoscopy (VNB) is proposed to address this issue of CT-to-body divergence. Materials and Methods: We extended and validated an unsupervised learning method that generates a depth map directly from bronchoscopic images using a Three Cycle-Consistent Generative Adversarial Network (3cGAN) and registers the depth map to pre-procedural CTs. We tested the working hypothesis that the proposed VNB can be integrated into a navigated bronchoscopy system based on 3D Slicer and can accurately register bronchoscopic images to pre-procedural CTs to navigate transbronchial biopsies. The quantitative metrics used to assess this hypothesis were the Absolute Tracking Error (ATE) of tracking and the Target Registration Error (TRE) of the overall navigation system. We validated our method on phantoms produced from the pre-procedural CTs of five patients who underwent ENB and on two ex-vivo pig lung specimens. Results: The ATE using 3cGAN was 6.2 ± 2.9 mm. The ATE of 3cGAN was statistically significantly lower than that of cGAN, particularly in the trachea and lobar bronchus (p < 0.001). The TRE of the proposed method ranged from 11.7 to 40.5 mm. The TRE computed by 3cGAN was statistically significantly smaller than that computed by cGAN in two of the five cases enrolled (p < 0.05). Conclusion: VNB using 3cGAN to generate depth maps was technically and clinically feasible. While the accuracy of tracking by 3cGAN was acceptable, the TRE warrants further investigation and improvement.
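As a rough illustration of the depth-based registration step this abstract describes, the sketch below aligns a depth map predicted from a bronchoscopic frame with depth maps rendered from the pre-procedural CT airway surface by optimising a six-parameter camera pose. The `predict_depth` and `render_depth` callables are hypothetical stand-ins for the 3cGAN generator and a CT virtual-depth renderer, and the Powell-based optimisation is an assumed registration strategy, not the authors' published implementation.

```python
import numpy as np
from scipy.optimize import minimize

def pose_to_matrix(p):
    """Convert (rx, ry, rz, tx, ty, tz) into a 4x4 rigid transform."""
    rx, ry, rz, tx, ty, tz = p
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(rx), -np.sin(rx)],
                   [0, np.sin(rx),  np.cos(rx)]])
    Ry = np.array([[ np.cos(ry), 0, np.sin(ry)],
                   [0, 1, 0],
                   [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0],
                   [np.sin(rz),  np.cos(rz), 0],
                   [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [tx, ty, tz]
    return T

def register_frame(frame, ct_surface, predict_depth, render_depth, p0):
    """Estimate the bronchoscope pose whose rendered CT depth best
    matches the depth predicted from the video frame.

    predict_depth: hypothetical wrapper around the 3cGAN generator.
    render_depth:  hypothetical virtual-depth renderer for the CT airway mesh.
    """
    d_pred = predict_depth(frame)

    def cost(p):
        d_ct = render_depth(ct_surface, pose_to_matrix(p))
        mask = np.isfinite(d_pred) & np.isfinite(d_ct)  # overlapping valid pixels
        return np.mean((d_pred[mask] - d_ct[mask]) ** 2)

    res = minimize(cost, p0, method="Powell")  # derivative-free local search
    return pose_to_matrix(res.x)
```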
Robotic-assisted orthopaedic procedures demand accurate spatial joint measurements. Tracking of human joint motion is challenging in many applications, such as sport motion analysis. In orthopaedic surgery, these challenges are even more prevalent, as small errors may cause iatrogenic damage in patients, highlighting the need for robust and precise joint and instrument tracking methods. In this study, we present a novel kinematic modelling approach to track any anatomical point on the femur and/or tibia by exploiting optical tracking measurements combined with a priori computed tomography information. The framework supports simultaneous tracking of anatomical positions, from which we calculate the pose of the leg (joint angles and translations of both the hip and knee joints) and of each of the surgical instruments. Experimental validation on cadaveric data shows that our method is capable of measuring these anatomical regions with sub-millimetre accuracy and a maximum joint angle uncertainty of ±0.47°. This study is a fundamental step in robotic orthopaedic research and can serve as a ground truth for future work, such as automating leg manipulation in orthopaedic procedures.
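A minimal sketch of the core idea of mapping CT-defined anatomical points into the optical tracker frame is given below, assuming a rigid fiducial-based registration solved with the Kabsch algorithm. The fiducial coordinates and the femoral head landmark are illustrative values, and the full kinematic framework in the paper is considerably richer than this single rigid transform.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q (Nx3)."""
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pc).T @ (Q - Qc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Qc - R @ Pc
    return R, t

# Fiducials defined in the CT frame (illustrative coordinates, in mm).
fid_ct = np.array([[0.0, 0.0, 0.0],
                   [50.0, 0.0, 0.0],
                   [0.0, 40.0, 0.0],
                   [0.0, 0.0, 30.0]])

# Synthesise the tracker-frame fiducials with a known pose so the demo
# is self-contained; in practice these come from the optical tracker.
theta = np.deg2rad(20.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([100.0, -20.0, 5.0])
fid_trk = fid_ct @ R_true.T + t_true

R, t = kabsch(fid_ct, fid_trk)

# Any anatomical point picked in the CT (e.g. a femoral head centre,
# illustrative coordinates) can now be expressed in the tracker frame.
femoral_head_ct = np.array([12.3, -8.1, 95.0])
femoral_head_trk = R @ femoral_head_ct + t
print(femoral_head_trk)
```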
Robotic-assisted surgical procedures have recently increased in popularity in clinical environments. Applications of clinically approved surgical robots range from minimally invasive surgery to open joint replacements. In hip and knee orthopaedic procedures, access to leg joint cavities requires constant manipulation of the patient's leg to a high degree of accuracy to reduce surgical injuries. This study develops a nine degree-of-freedom serial kinematic model of the human leg, using the well-known Denavit-Hartenberg parameters, for robotic-assisted leg manipulation during orthopaedic leg surgery. The proposed model is validated through human cadaver experiments, with an optical tracking system used as ground truth to measure the leg pose. The pose of the leg derived from the model's knee and foot workspace was compared with the measured cadaver leg position: the positional errors at the knee and foot were 0.43 mm and 0.4 mm respectively, with a maximum uncertainty of 3.51 mm in the foot position. This demonstrates that the proposed model provides an accurate representation of human leg motion for automated leg manipulation during orthopaedic surgery.
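The sketch below illustrates forward kinematics with standard Denavit-Hartenberg parameters, the convention named in the abstract. The nine-row DH table is a placeholder layout (a 3-DOF hip, 3-DOF knee, and 3-DOF ankle stack with assumed segment lengths); it is not the set of parameters identified in the paper.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one joint, standard DH convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(q, dh_table):
    """Chain the per-joint transforms from the base (hip) to the foot."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(q, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Placeholder 9-row DH table (d, a, alpha); values are assumptions for
# illustration only, not the parameters from the paper.
L_FEMUR, L_TIBIA = 0.44, 0.41          # assumed segment lengths in metres
dh_table = [
    (0.0,     0.0,  np.pi / 2),        # hip flexion/extension
    (0.0,     0.0, -np.pi / 2),        # hip abduction/adduction
    (L_FEMUR, 0.0,  0.0),              # hip internal/external rotation
    (0.0,     0.0,  np.pi / 2),        # knee flexion/extension
    (0.0,     0.0, -np.pi / 2),        # knee varus/valgus
    (L_TIBIA, 0.0,  0.0),              # knee internal/external rotation
    (0.0,     0.0,  np.pi / 2),        # ankle plantar/dorsiflexion
    (0.0,     0.0, -np.pi / 2),        # ankle inversion/eversion
    (0.0,     0.1,  0.0),              # foot reference offset
]

q = np.zeros(9)                        # neutral joint angles in radians
T_foot = forward_kinematics(q, dh_table)
print("foot position in hip frame:", T_foot[:3, 3])
```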
Navigation in endoscopic environments requires an accurate and robust localisation system. A key challenge in such environments is the paucity of visual features, which hinders accurate tracking. This paper examines the performance of three image enhancement techniques for tracking under such feature-poor conditions: Contrast Limited Adaptive Histogram Specification (CLAHS), Fast Local Laplacian Filtering (LLAP), and a new combination of the two coined Local Laplacian of Specified Histograms (LLSH). Two cadaveric knee arthroscopy datasets and an underwater seabed inspection dataset are used for the analysis, where results are interpreted by defining visual saliency as the number of correctly matched key-point features (SIFT and SURF). Experimental results show a significant improvement in contrast quality and feature matching performance when image enhancement techniques are used. Results also demonstrate LLSH's ability to vastly improve SURF tracking performance, with more than 87% of frames successfully matched. A comparative analysis provides some important insights useful in the design of vision-based navigation for autonomous agents in feature-poor environments.
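To make the evaluation protocol concrete, the sketch below enhances a pair of frames and counts ratio-test-surviving key-point matches as the visual-saliency score. OpenCV's CLAHE (adaptive equalisation) stands in for the CLAHS/LLSH variants studied in the paper, SIFT stands in for SURF (which requires opencv-contrib), and the frame file names are placeholders.

```python
import cv2

def enhance(gray):
    """Contrast-limited adaptive enhancement (CLAHE as a CLAHS stand-in)."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray)

def count_matches(img_a, img_b, ratio=0.7):
    """Number of SIFT key-point matches surviving Lowe's ratio test."""
    sift = cv2.SIFT_create()
    _, da = sift.detectAndCompute(img_a, None)
    _, db = sift.detectAndCompute(img_b, None)
    if da is None or db is None:
        return 0
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(da, db, k=2)
    return sum(1 for p in pairs
               if len(p) == 2 and p[0].distance < ratio * p[1].distance)

# Compare raw vs enhanced matching on two consecutive frames
# (file names are placeholders).
f0 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
f1 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
print("raw matches:     ", count_matches(f0, f1))
print("enhanced matches:", count_matches(enhance(f0), enhance(f1)))
```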