Abstract. Minimally invasive CT-guided interventions are an attractive option for diagnostic biopsy and localized therapy delivery. This paper describes the concept of a new prototype robotic tool, developed in a preliminary study for radiological image-guided interventions. Its very compact and light design is optimized for use inside a CT gantry, where a bulky robot is inappropriate, particularly with a large patient and long, stiff instruments such as biopsy needles or a trocar. Additionally, a new automatic image-guided control based on "visual servoing" is presented for automatic, uncalibrated needle placement under CT fluoroscopy. Visual servoing is well established in the field of industrial robotics when using CCD cameras. We adapted this approach and optimized it for CT-fluoroscopy-guided interventions. It is a simple and accurate method that requires no prior calibration or registration. Therefore, no additional sensors (infrared, laser, ultrasound, etc.), no stereotactic frame, and no additional calibration phantom are needed. Our technique provides accurate 3D alignment of the needle with respect to an anatomic target. A first evaluation of the robot using CT fluoroscopy showed a needle placement accuracy of ±0.4 mm (principle accuracy) and ±1.6 mm in a small pig study. These first promising results present our method as a possible alternative to other needle placement techniques requiring cumbersome and time-consuming calibration procedures.
This paper presents a new approach to image-based guidance of a needle or surgical tool during percutaneous procedures. The method is based on visual servoing and requires no prior calibration or registration. The technique provides highly precise 3D alignment of the tool with respect to an anatomic target. By taking advantage of projective geometry and projective invariants, this can be achieved in a fixed number (12) of iterations. In addition, the approach estimates the required insertion depth. Experiments include automatic 3D alignment and insertion of a needle held by a medical robot into a pig kidney under X-ray fluoroscopy.
Visual servoing is well established in the field of industrial robotics when using CCD cameras. This paper describes one of the first medical implementations of uncalibrated visual servoing. To our knowledge, this is the first time that visual servoing has been performed using X-ray fluoroscopy. We present a new image-based approach for semi-automatic guidance of a needle or surgical tool during percutaneous procedures, based on a series of granted and pending US patent applications [1][2]. It is a simple and accurate method that requires no prior calibration or registration. Therefore, no additional sensors (infrared, laser, ultrasound, MRI, etc.), no stereotactic frame, and no additional calibration phantom are needed. Our technique provides accurate 3D alignment of the tool with respect to an anatomic target and estimates the required insertion depth. We implemented and verified this method with three different medical robots at the Computer Integrated Surgery (CIS) Lab at Johns Hopkins University. First tests were performed using a CCD camera and a mobile uniplanar X-ray fluoroscope as imaging modalities. We used small metal balls, 4 mm in diameter, as target points, placed 60 to 70 mm deep inside a test phantom. Our method led to correct insertions with a mean deviation of 0.20 mm with the CCD camera and of about 1.5 mm in a clinical setting with an older X-ray imaging system, where image quality was limited. These promising results present this method as a serious alternative to other needle placement techniques that require cumbersome and time-consuming calibration procedures.
This paper presents an approach for image-based guidance of a surgical tool towards multiple targets from fixed or variable entry points. The method is based on visual servoing. It requires no prior calibration or registration. By taking advantage of projective invariants, precise needle alignment to a target can be achieved in a fixed number (12) of iterations. Alignment to n targets can be performed in 6*(n+1) iterations. Elements of error analysis and a discussion of the "optimal" placement of the planes used in the method are given. We also show how the approach can be used to estimate the entry point and orientation to reach an anatomic target while passing through a given anatomic landmark.
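The core idea shared by these abstracts, uncalibrated visual servoing, can be illustrated with a minimal sketch: instead of calibrating the imaging system, the controller estimates an image Jacobian from small exploratory motions and then iteratively drives the observed image-space error to zero within a fixed iteration budget (the abstracts cite 12). The helper names (`estimate_jacobian`, `servo_to_target`), the gain, the tolerance, and the toy linear "robot + imager" stand-in below are illustrative assumptions, not details from the papers.

```python
import numpy as np

def estimate_jacobian(move, observe, n_axes, eps=1.0):
    """Finite-difference estimate of the image Jacobian: how a small
    motion along each actuation axis shifts the observed image features.
    No camera model or registration is used (hence 'uncalibrated')."""
    base = observe()
    J = np.zeros((base.size, n_axes))
    for i in range(n_axes):
        dq = np.zeros(n_axes)
        dq[i] = eps
        move(dq)                          # small exploratory motion
        J[:, i] = (observe() - base) / eps
        move(-dq)                         # return to the starting pose
    return J

def servo_to_target(move, observe, target, n_axes, iters=12, gain=0.7):
    """Iteratively reduce the image-space error using the estimated
    Jacobian; 'iters' caps the number of correction steps."""
    J = estimate_jacobian(move, observe, n_axes)
    for _ in range(iters):
        err = target - observe()
        if np.linalg.norm(err) < 0.1:     # illustrative tolerance
            break
        move(gain * np.linalg.pinv(J) @ err)
    return observe()

# Toy stand-in for the robot and imager: a fixed (unknown to the
# controller) linear map from axis positions to image coordinates.
A = np.array([[2.0, 0.3],
              [0.1, 1.5]])
state = np.zeros(2)

def move(dq):
    state[:] = state + dq                 # in-place update of axis positions

def observe():
    return A @ state                      # "image" position of the tool tip

final = servo_to_target(move, observe, np.array([5.0, -3.0]), n_axes=2)
```

In this toy setting the finite-difference Jacobian is exact, so the error shrinks geometrically by the factor (1 − gain) per step and converges well within the 12-iteration budget; with real fluoroscopic feature detection the same loop runs against noisy measurements.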