Minimally invasive beating-heart surgery offers substantial benefits for the patient compared to conventional open surgery. Nevertheless, the motion of the heart places increased demands on the surgeon. To support the surgeon, algorithms for an advanced robotic surgery system are proposed that compensate for the motion of the beating heart. This requires measuring the heart motion, which can be achieved by tracking natural landmarks. In most cases, the investigated affine tracking scheme can be reduced to an efficient block-matching algorithm, allowing for real-time tracking of multiple landmarks. Fourier analysis of the motion parameters shows two dominant peaks, which correspond to the heart and respiration rates of the patient. Robustness against disturbance or occlusion can be improved by specially developed prediction schemes. Local prediction is well suited for detecting single tracking outliers. A global prediction scheme takes several landmarks into account simultaneously and can bridge longer disturbances. As the heart motion is strongly correlated with the patient's electrocardiogram and respiration pressure signal, this information is included in a novel robust multisensor prediction scheme. Prediction results are compared to those of an artificial neural network and of a linear prediction approach, showing the superior performance of the proposed algorithms.
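The spectral analysis described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 50 Hz tracking rate and the 1.2 Hz (heartbeat) and 0.25 Hz (respiration) components of the synthetic landmark trajectory are assumed values chosen only to show the two dominant peaks.

```python
import numpy as np

def dominant_frequencies(signal, fs, n_peaks=2):
    """Return the n_peaks strongest frequency components of a real signal."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    # Indices of the largest spectral magnitudes.
    idx = np.argsort(spectrum)[-n_peaks:]
    return sorted(freqs[idx])

# Synthetic landmark trajectory (assumed): ~72 bpm heartbeat (1.2 Hz)
# superimposed on ~15 breaths/min respiration (0.25 Hz).
fs = 50.0                            # assumed tracking rate in Hz
t = np.arange(0, 40, 1.0 / fs)
motion = 2.0 * np.sin(2 * np.pi * 0.25 * t) + 1.0 * np.sin(2 * np.pi * 1.2 * t)

peaks = dominant_frequencies(motion, fs)
# The two recovered peaks correspond to respiration and heart rate.
```

In a real system the same analysis would be applied to the tracked affine motion parameters of each landmark rather than to a synthetic sinusoid.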
This preliminary study suggests that robot-guided drilling of a minimally invasive approach to the cochlea might be feasible, but further improvements are necessary before any clinical application becomes possible. Where the width of the facial recess is less than 2.5 mm, the chorda tympani nerve and the ossicles are at risk.
Abstract-Minimally invasive surgery (MIS) challenges the surgeon's skills due to his/her separation from the operation area, which can be reached with long instruments only. Therefore, the surgeon loses access to the manipulation forces inside the patient. This reduces his/her dexterity when performing the operation. A new compact and lightweight robot for MIS is presented, which allows for the measurement of manipulation forces. The main advantage of this concept is that no miniaturized force sensor has to be integrated into surgical instruments and inserted into the patient. Rather, outside the patient, a standard sensor is attached to a modified trocar, which allows for the undisturbed measurement of manipulation forces. This approach reduces costs and sterilization requirements. Results of in vitro and in vivo force control experiments are presented to validate the concepts.

Index Terms-Force control, force measurement, minimally invasive surgery (MIS).
Purpose Automated segmentation of anatomical structures in medical image analysis is a prerequisite for autonomous diagnosis as well as various computer- and robot-aided interventions. Recent methods based on deep convolutional neural networks (CNNs) have outperformed earlier heuristic methods. However, those methods were primarily evaluated on rigid, real-world environments. In this study, existing segmentation methods were evaluated for their use on a new dataset of transoral endoscopic exploration. Methods Four machine-learning-based methods, SegNet, UNet, ENet, and ErfNet, were trained with supervision on a novel 7-class dataset of the human larynx. The dataset contains 536 manually segmented images from two patients during laser incisions. The Intersection-over-Union (IoU) evaluation metric was used to measure the accuracy of each method. Data augmentation and network ensembling were employed to increase segmentation accuracy. Stochastic inference was used to show the uncertainties of the individual models. Patient-to-patient transfer was investigated using patient-specific fine-tuning. Results In this study, a weighted-average ensemble network of UNet and ErfNet was best suited for the segmentation of laryngeal soft tissue, with a mean IoU of 84.7%. The highest efficiency was achieved by ENet, with a mean inference time of 9.22 ms per image. It is shown that 10 additional images from a new patient are sufficient for patient-specific fine-tuning. Conclusion CNN-based methods for semantic segmentation are applicable to endoscopic images of laryngeal soft tissue. The segmentation can be used for active constraints or to monitor morphological changes and autonomously detect pathologies. Further improvements could be achieved by using a larger dataset or by training the models in a self-supervised manner on additional unlabeled data.
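The IoU evaluation metric can be illustrated on a toy label map. The `mean_iou` helper and the 4x4 three-class example below are hypothetical, chosen only to show how the metric is computed; they are not the larynx dataset or the study's evaluation code.

```python
import numpy as np

def mean_iou(pred, target, n_classes):
    """Mean Intersection-over-Union across classes present in either mask."""
    ious = []
    for c in range(n_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:              # class absent everywhere: skip it
            continue
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

# Toy 4x4 label maps with 3 classes (hypothetical values).
target = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [2, 2, 2, 2],
                   [2, 2, 2, 2]])
pred = target.copy()
pred[0, 2] = 0                      # one mislabelled pixel
score = mean_iou(pred, target, n_classes=3)
# Per-class IoU: class 0 -> 4/5, class 1 -> 3/4, class 2 -> 8/8.
```

A single mislabelled pixel lowers the IoU of both affected classes, which is why the metric is a stricter accuracy measure than per-pixel accuracy.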
This paper presents a novel system for accurate placement of pedicle screws. The system consists of a new lightweight (<10 kg), kinematically redundant, and fully torque-controlled robot. Additionally, the pose of the robot tool-center point is tracked by an optical navigation system, serving as an external reference source. Therefore, it is possible to measure and to compensate deviations between the intraoperative and the preoperatively planned pose. The robotic arm itself is impedance controlled. This allows for a new intuitive man-machine interface, as the joint units are equipped with torque sensors: the robot can be moved simply by pulling/pushing its structure. The surgeon has full control of the robot at every step of the intervention. The hand-eye-coordination problems known from manual pedicle screw placement can be avoided.
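The impedance-control idea (a virtual spring-damper in Cartesian space plus gravity compensation, so the arm holds its pose yet yields to a push) can be sketched as follows. The kinematics, gains, and gravity model are placeholder assumptions for a trivial 2-DOF robot, not the actual system's parameters.

```python
import numpy as np

def impedance_torque(q, q_dot, x_d, K, D, fk, jacobian, gravity):
    """Cartesian impedance control with gravity compensation:
    tau = J(q)^T (K (x_d - x) - D x_dot) + g(q)."""
    x = fk(q)                                # tool-center-point pose
    J = jacobian(q)
    x_dot = J @ q_dot
    wrench = K @ (x_d - x) - D @ x_dot       # virtual spring-damper force
    return J.T @ wrench + gravity(q)

# Toy 2-DOF Cartesian robot (assumed kinematics): fk is the identity,
# so the Jacobian is I and the gravity torque is constant.
fk = lambda q: q
jacobian = lambda q: np.eye(2)
gravity = lambda q: np.array([0.0, 9.81])

K = np.diag([100.0, 100.0])   # stiffness: low values make the arm compliant
D = np.diag([10.0, 10.0])     # damping

q = np.array([0.3, 0.4])
tau = impedance_torque(q, np.zeros(2), x_d=q, K=K, D=D,
                       fk=fk, jacobian=jacobian, gravity=gravity)
# At the desired pose with zero velocity only gravity is compensated;
# an external push displaces the arm against the virtual spring.
```

Hand-guiding as described in the abstract corresponds to lowering the stiffness `K` (or shifting `x_d` along the measured external torque), so the operator's push dominates the virtual spring.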
Image-guided robots have been widely used for bone shaping and percutaneous access to interventional sites. However, due to high-accuracy requirements and proximity to sensitive nerves and brain tissue, the adoption of robots in inner-ear surgery has been slower. In this paper, the authors present their recent work toward developing two image-guided industrial robot systems for accessing challenging inner-ear targets. Features of the systems include optical tracking of the robot base and tool relative to the patient, and Kalman filter-based fusion of redundant sensory information (from encoders and optical tracking systems) for enhanced patient safety. The approach enables control of differential rather than absolute robot positions, permitting simplified calibration procedures and reducing the system's reliance on robot calibration to ensure overall accuracy. Lastly, the authors present the results of two phantom validation experiments simulating the use of image-guided robots in inner-ear surgeries such as cochlear implantation and petrous apex access.
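The fusion of redundant position sensors can be illustrated with inverse-variance weighting, the steady-state form of a Kalman measurement update. The readings and variances below are made-up values for a single axis, not data from the experiments, and a full implementation would run a recursive filter over the robot's state.

```python
def fuse(x1, var1, x2, var2):
    """Fuse two redundant measurements of the same quantity (e.g. a
    position from joint encoders and one from an optical tracker) by
    inverse-variance weighting; the fused variance is always smaller
    than either input variance."""
    w1 = var2 / (var1 + var2)
    fused = w1 * x1 + (1.0 - w1) * x2
    fused_var = (var1 * var2) / (var1 + var2)
    return fused, fused_var

# Assumed readings in mm: encoders (precise) vs. optical tracker (noisier).
pos, var = fuse(10.00, 0.01, 10.30, 0.09)
# The fused estimate lies close to the more precise encoder reading.
```

For patient safety, a large discrepancy between the two sensors (relative to their variances) can additionally be used to flag a fault and halt the robot, which is one motivation for carrying redundant sensing in the first place.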
Abstract-This video presents the in-house-developed DLR MiroSurge robotic system for surgery. As shown, the system is suitable for both minimally invasive and open surgery. An essential part of the system is the MIRO robot: its soft-robotics features enable intuitive interaction with the robot. In the presented minimally invasive robotic setup, three MIROs guide an endoscopic stereo camera and two endoscopic instruments with force-feedback sensors. The master console for teleoperation consists of an autostereoscopic monitor and force-reflecting input devices for both hands. Versatility is shown with two additional applications: for assistance in manual minimally invasive surgery, a MIRO robot automatically guides the endoscope such that the surgical instrument is always in view; in a biopsy application, the MIRO robot positions the needle with navigation-system support.