Background: Electromagnetically Navigated Bronchoscopy (ENB) is the current state of the art in diagnostic and interventional bronchoscopy. CT-to-body divergence is a critical hurdle in ENB, causing navigation error and ultimately limiting the clinical efficacy of diagnosis and treatment. In this study, Visually Navigated Bronchoscopy (VNB) is proposed to address CT-to-body divergence.
Materials and Methods: We extended and validated an unsupervised learning method that generates a depth map directly from bronchoscopic images using a Three Cycle-Consistent Generative Adversarial Network (3cGAN) and registers the depth map to pre-procedural CTs. We tested the working hypothesis that the proposed VNB can be integrated into a navigated bronchoscopy system based on 3D Slicer and can accurately register bronchoscopic images to pre-procedural CTs to navigate transbronchial biopsies. The quantitative metrics used to assess this hypothesis were the Absolute Tracking Error (ATE) of tracking and the Target Registration Error (TRE) of the overall navigation system. We validated our method on phantoms produced from the pre-procedural CTs of five patients who underwent ENB and on two ex-vivo pig lung specimens.
Results: The ATE using 3cGAN was 6.2 ± 2.9 [mm]. The ATE of 3cGAN was statistically significantly lower than that of cGAN, particularly in the trachea and lobar bronchus (p < 0.001). The TRE of the proposed method ranged from 11.7 to 40.5 [mm]. The TRE computed by 3cGAN was statistically significantly smaller than that computed by cGAN in two of the five cases enrolled (p < 0.05).
Conclusion: VNB, using 3cGAN to generate the depth maps, was technically and clinically feasible. While the accuracy of tracking by 3cGAN was acceptable, the TRE warrants further investigation and improvement.
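The pipeline described above has two stages: predicting a depth map from each bronchoscopic frame with 3cGAN, and registering that depth map to the pre-procedural CT. The abstract does not specify the registration algorithm, so the snippet below is only a minimal sketch of the second stage under common assumptions: the predicted depth map is unprojected into a camera-space point cloud with pinhole intrinsics and rigidly aligned to CT-derived airway surface points with a basic ICP loop. All names (the intrinsics, the depth array, the surface points) are illustrative, not the authors' code.

```python
# Minimal sketch: register a GAN-predicted bronchoscopic depth map to a
# CT-derived airway surface via point-cloud ICP. The pinhole intrinsics
# and variable names are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

def unproject_depth(depth, fx, fy, cx, cy):
    """Lift an HxW depth map into an (N, 3) camera-space point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth.ravel()
    valid = z > 0                                    # skip empty pixels
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    return np.stack([x, y, z], axis=1)[valid]

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard vs. reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, mu_d - R @ mu_s

def icp(src, ct_surface_pts, iters=50, tol=1e-6):
    """Iteratively align the camera cloud to the CT airway surface."""
    tree = cKDTree(ct_surface_pts)
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(src)                  # closest-point matches
        R, t = kabsch(src, ct_surface_pts[idx])
        src = src @ R.T + t                          # apply the update
        R_total, t_total = R @ R_total, R @ t_total + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total, err
```

In a setup like the one evaluated above, the recovered camera pose per frame could be compared against an external tracker or phantom ground truth to compute an ATE, and the residual alignment error at a biopsy target would correspond to the TRE.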
Robotic-assisted orthopaedic procedures demand accurate spatial joint measurements. Tracking human joint motion is challenging in many applications, such as sports motion analysis. In orthopaedic surgery these challenges are even more prevalent, since small errors may cause iatrogenic damage in patients, highlighting the need for robust and precise joint and instrument tracking methods. In this study, we present a novel kinematic modelling approach to track any anatomical point on the femur and/or tibia by combining optical tracking measurements with a priori computed tomography information. The framework supports simultaneous tracking of anatomical positions, from which we calculate the pose of the leg (joint angles and translations of both the hip and knee joints) and of each surgical instrument. Experimental validation on cadaveric data shows that our method measures these anatomical regions with sub-millimetre accuracy, with a maximum joint angle uncertainty of ±0.47°. This study is a fundamental step in robotic orthopaedic research and can serve as a ground truth for future work such as automating leg manipulation in orthopaedic procedures.
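The core idea of such a framework is to register the CT volume to optically tracked bone-mounted markers once, then propagate CT-defined landmarks through the per-frame marker poses and derive joint angles from the relative bone frames. The abstract does not give the kinematic model, so the following is only a minimal sketch under stated assumptions: 4x4 homogeneous transforms for all poses and a z-y-x Euler decomposition standing in for a proper anatomical joint coordinate system (e.g., Grood-Suntay). All frame and landmark names are hypothetical.

```python
# Minimal sketch: map CT-space anatomical landmarks into optical-tracker
# space via tracked bone markers, and derive knee joint angles from the
# relative femur/tibia frames. Frames and conventions are assumptions.
import numpy as np
from scipy.spatial.transform import Rotation

def to_homogeneous(R, t):
    """Pack a 3x3 rotation and a translation into a 4x4 pose matrix."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def track_landmark(T_tracker_marker, T_marker_ct, p_ct):
    """Map a CT-space anatomical point into optical-tracker space.

    T_tracker_marker : 4x4 pose of the bone-mounted marker (per frame).
    T_marker_ct      : 4x4 CT-to-marker registration (computed once,
                       e.g. from fiducials visible in the CT scan).
    p_ct             : (3,) landmark picked in the CT volume.
    """
    p = np.append(p_ct, 1.0)                         # homogeneous point
    return (T_tracker_marker @ T_marker_ct @ p)[:3]

def knee_angles(T_tracker_femur, T_tracker_tibia):
    """Relative femur-to-tibia rotation as three joint angles [deg]."""
    R_rel = T_tracker_femur[:3, :3].T @ T_tracker_tibia[:3, :3]
    return Rotation.from_matrix(R_rel).as_euler("zyx", degrees=True)
```

Per optical frame, one would rebuild the femur and tibia poses from the tracker output (e.g., with to_homogeneous), recompute each landmark with track_landmark, and log the angles from knee_angles; joint translations follow analogously from the relative translation between the two bone frames.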