Purpose: Fluoroscopy is the standard imaging modality used to guide hip surgery and is therefore a natural sensor for computer-assisted navigation. To efficiently solve the complex registration problems presented during navigation, human-assisted annotations of the intraoperative image are typically required. This manual initialization interferes with the surgical workflow and diminishes any advantages gained from navigation. In this paper we propose a method for fully automatic registration using anatomical annotations produced by a neural network. Methods: Neural networks are trained to simultaneously segment anatomy and identify landmarks in fluoroscopy. Training data is obtained using a computationally intensive 2D/3D registration of the pelvis and each femur that is incompatible with intraoperative use. Ground truth 2D segmentation labels and anatomical landmark locations are established using projected 3D annotations. Intraoperative registration couples a traditional intensity-based strategy with annotations inferred by the network and requires no human assistance.
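As an illustration of the label-generation step described above, the following minimal Python sketch (not the authors' code; the projection matrix P, geometry, and landmark values are hypothetical) projects 3D landmark annotations into a 2D image plane using a 3x4 projection matrix obtained from a prior 2D/3D registration, which is one way 2D ground truth can be derived from 3D annotations.

import numpy as np

def project_landmarks(P, points_3d):
    """Project Nx3 world-space landmarks to Nx2 pixel coordinates."""
    pts_h = np.hstack([points_3d, np.ones((points_3d.shape[0], 1))])  # homogeneous coordinates
    proj = (P @ pts_h.T).T                                            # Nx3 projected points
    return proj[:, :2] / proj[:, 2:3]                                 # perspective divide

# Hypothetical intrinsics/extrinsics folded into one projection matrix
P = np.array([[1500.0, 0.0, 768.0, 0.0],
              [0.0, 1500.0, 768.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
landmarks_3d = np.array([[10.0, -25.0, 600.0],    # illustrative 3D landmark positions (mm)
                         [-12.0, -27.0, 610.0]])
print(project_landmarks(P, landmarks_3d))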
Objective: State-of-the-art navigation systems for pelvic osteotomies use optical systems with external fiducials. We propose the use of X-ray navigation for pose estimation of periacetabular fragments without fiducials. Methods: A 2D/3D registration pipeline was developed to recover fragment pose. This pipeline was tested through an extensive simulation study and 6 cadaveric surgeries. Using osteotomy boundaries in the fluoroscopic images, the preoperative plan is refined to more accurately match the intraoperative shape. Results: In simulation, average fragment pose errors were 1.3°/1.7 mm when the planned fragment matched the intraoperative fragment, 2.2°/2.1 mm when the plan was not updated to match the true shape, and 1.9°/2.0 mm when the fragment shape was intraoperatively estimated. In cadaver experiments, the average pose errors were 2.2°/2.2 mm, 3.8°/2.5 mm, and 3.5°/2.2 mm when registering with the actual fragment shape, a preoperative plan, and an intraoperatively refined plan, respectively. Average errors of the lateral center edge angle were less than 2° for all fragment shapes in simulation and cadaver experiments. Conclusion: The proposed pipeline is capable of accurately reporting femoral head coverage within a range clinically identified for long-term joint survivability. Significance: Human interpretation of fragment
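For concreteness, pose errors reported as degrees/millimeters, like those above, can be computed from the relative rigid transform between an estimated and a ground-truth fragment pose. The short Python sketch below shows one standard way to do this; it is an assumption about the error metric rather than the authors' published code, and the 4x4 homogeneous transforms T_est and T_gt are hypothetical inputs.

import numpy as np

def pose_error(T_est, T_gt):
    """Return (rotation error in degrees, translation error in mm) for 4x4 rigid transforms."""
    dT = np.linalg.inv(T_gt) @ T_est                                   # relative pose error
    cos_theta = np.clip((np.trace(dT[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    rot_err_deg = np.degrees(np.arccos(cos_theta))                     # angle of the relative rotation
    trans_err_mm = np.linalg.norm(dT[:3, 3])                           # magnitude of the relative translation
    return rot_err_deg, trans_err_mm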
Dexterous continuum manipulators (DCMs) can greatly increase the reachable region and steerability for minimally and less invasive surgery. Many such procedures require the DCM to produce large deflections, so real-time control of the DCM shape requires sensors that accurately detect and report large deflections. We propose a novel, large-deflection shape sensor to track the shape of a 35 mm DCM designed for a less invasive treatment of osteolysis. Two shape sensors, each with three fiber Bragg grating (FBG) sensing nodes, are embedded within the DCM, with the sensors' distal ends fixed to the DCM. The DCM centerline is computed from the centerlines of the two sensor curves. An experimental platform was built and several groups of experiments were carried out, including free bending and three cases of bending with obstacles. For each experiment, the DCM drive cable was pulled with a precise linear slide stage, the DCM centerline was calculated, and a 2D camera image was captured for verification. The shape reconstructed from the shape sensors is compared with the ground truth generated by a 2D–3D registration between the camera image and the 3D DCM model. Results show that the distal tip tracking accuracy is 0.40 ± 0.30 mm for free bending and 0.61 ± 0.15 mm, 0.93 ± 0.05 mm, and 0.23 ± 0.10 mm for the three cases of bending with obstacles. The data suggest FBG arrays can accurately characterize the shape of large-deflection DCMs.
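The centerline computation can be pictured with a simple planar example: each FBG node's wavelength shift is mapped to a local curvature, and the curve is rebuilt by integrating curvature along arc length. The Python snippet below is only a 2D illustration under stated assumptions (the calibration constant k_cal, node arc-length positions, and wavelength shifts are hypothetical, and the actual sensor pair reconstructs a curve in 3D), not the authors' reconstruction algorithm.

import numpy as np

def reconstruct_centerline(shifts_nm, node_arclen_mm, k_cal=0.02, ds=0.5):
    """Integrate interpolated curvature (1/mm) along arc length to get an (x, y) centerline in mm."""
    s = np.arange(0.0, node_arclen_mm[-1] + ds, ds)
    kappa = np.interp(s, node_arclen_mm, k_cal * np.asarray(shifts_nm))  # curvature profile along the sensor
    theta = np.cumsum(kappa) * ds                                        # tangent angle from integrated curvature
    x = np.cumsum(np.cos(theta)) * ds
    y = np.cumsum(np.sin(theta)) * ds
    return np.column_stack([x, y])

# Three sensing nodes along a 35 mm sensor; values purely illustrative
curve = reconstruct_centerline([0.4, 0.7, 1.1], [8.0, 20.0, 33.0])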
Purpose: Minimally invasive alternatives are now available for many complex surgeries. These approaches are enabled by the increasing availability of intraoperative image guidance. Yet, fluoroscopic X-rays suffer from projective transformation and thus cannot provide direct views onto anatomy. Surgeons could benefit greatly from additional information, such as anatomical landmark locations in the projections, to support intraoperative decision making. However, detecting landmarks is challenging since the viewing direction changes substantially between views, leading to varying appearance of the same landmark. To the best of our knowledge, view-independent anatomical landmark detection has therefore not yet been investigated. Methods: In this work, we propose a novel approach to detect multiple anatomical landmarks in X-ray images from arbitrary viewing directions. To this end, a sequential prediction framework based on convolutional neural networks is employed to simultaneously regress all landmark locations. For training, synthetic X-rays are generated with a physically accurate forward model that allows direct application of the trained model to real X-ray images of the pelvis. View invariance is achieved via data augmentation by sampling viewing angles on a spherical segment of 120° × 90°. Results: On synthetic data, a mean prediction error of 5.6 ± 4.5 mm is achieved. Further, we demonstrate that the trained model can be directly applied to real X-rays and show that these detections define correspondences to a respective CT volume, which allows for analytic estimation of the 11-degree-of-freedom projective mapping.
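The final analytic step mentioned above amounts to fitting a 3x4 projection matrix (11 degrees of freedom up to scale) to the detected 2D landmarks and their 3D CT correspondences. A minimal Python sketch of such a fit via the Direct Linear Transform is given below; it assumes at least six correspondences and omits coordinate normalization and robust estimation, so it is illustrative rather than the authors' exact implementation.

import numpy as np

def estimate_projection_dlt(points_3d, points_2d):
    """Solve for P (3x4, up to scale) such that points_2d ~ P @ [points_3d; 1]."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        # Each 2D/3D correspondence contributes two linear constraints on the 12 entries of P
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 4)   # right singular vector of the smallest singular value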