2013
DOI: 10.1109/tro.2013.2256690
Intensity-Based Ultrasound Visual Servoing: Modeling and Validation With 2-D and 3-D Probes

Abstract: In this paper, we present an ultrasound (US) visual servoing method to control a robotic system equipped with a US probe. To avoid the difficult and time-consuming image segmentation process, we develop a new approach that takes the intensity of the image pixels directly as visual input. The analytic form of the interaction matrix that relates the variation of the intensity features to the motion of the probe is established and used to control the six degrees of freedom (DOF) of the robotic system. Our approach …
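The abstract describes a classical image-based servoing scheme: an interaction matrix relates the time variation of the pixel intensities to the probe's 6-DOF velocity, and the control law drives the intensity error to zero. A minimal sketch of one such iteration, assuming the standard law v = -λ L⁺ (s - s*); the function name, the gain value, and the shapes are illustrative, not taken from the paper:

```python
import numpy as np

def servo_velocity(intensities, desired, L, gain=0.5):
    """One iteration of an intensity-based visual-servoing law.

    intensities, desired : flattened pixel-intensity vectors, shape (N,)
    L : interaction matrix, shape (N, 6), relating intensity variation
        to the probe's 6-DOF velocity screw
    Returns a 6-vector velocity command (vx, vy, vz, wx, wy, wz).
    """
    error = intensities - desired            # feature error s - s*
    # Classical law: v = -gain * pseudo-inverse(L) @ error,
    # which (for well-conditioned L) decreases the error exponentially.
    return -gain * np.linalg.pinv(L) @ error
```

In practice N is far larger than 6 (one feature per pixel in the region of interest), so the pseudo-inverse acts as a least-squares solve over a highly redundant feature set.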

Cited by 38 publications (31 citation statements)
References 22 publications
“…Janvier et al. (2014) used a robotic system to autonomously reconstruct a 3D US image of the arteries from the iliac in the lower abdomen down to the popliteal behind the knee; Pahl and Supriyanto (2015) used linear robotic stages to enable autonomous transabdominal ultrasonography of the cervix; Vitrani et al. (2005, 2007) implemented a US image-based visual servoing algorithm for autonomous guidance of an instrument during intracardiac surgery; Mebarki et al. (2008, 2010) utilized the concept of US image moments for autonomous visual servoing; Novotny et al. (2007) presented a real-time 3D US image-based visual servoing method to guide a surgical instrument to a tracked target location; Abolmaesumi et al. (2000) investigated the feasibility of visual servoing for motion in the plane of the US probe in one dimension; Nadeau and Krupa developed US image-based visual servoing techniques that use image intensities (Nadeau and Krupa, 2013) and moments based on image features (Nadeau and Krupa, 2010); Sauvée et al. (2008) introduced visual servoing of instrument motion based on US images through non-linear model predictive control; and Stoll et al. (2006) used a line detection algorithm and a passive instrument marker to provide real-time 3D US-based visual servoing of surgical instruments. Our approach differs from those systems because it implements a cooperative control scheme that fuses the US image acquired by an expert, the US probe position, and the contact force on the patient to enable reproducible probe placement (and therefore reproducible soft-tissue deformation) with respect to the target organ.…”
Section: Introduction
confidence: 99%
“…A more extensive version of this method was studied in , where the method was tested using a six‐DOF robot holding a US probe and tracking a user‐specified region in the US image when a phantom was randomly moved by hand. Nadeau and Krupa also developed an intensity‐based technique to perform visual servoing using 2D and 3D US probes. They implemented hybrid vision/force control to perform positioning and tracking tasks by using the Jacobian matrix relating time variation of pixel intensities to the probe velocity, while regulating the force of probe–tissue interaction to a constant value.…”
Section: Methods
confidence: 99%
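The hybrid vision/force control described above splits the 6-DOF command: the vision law drives most axes while the contact force along the probe's normal axis is regulated to a constant value. A minimal sketch of that split, assuming the force-controlled axis is the translational z of the probe frame and a simple proportional force regulator; the function name, axis index, and gain are illustrative, and the sign of the force term depends on the frame convention:

```python
import numpy as np

def hybrid_command(v_vision, f_measured, f_desired, kf=0.1):
    """Combine a 6-DOF visual-servoing velocity screw with force control.

    v_vision   : 6-vector (vx, vy, vz, wx, wy, wz) from the vision law
    f_measured : measured probe-tissue contact force (N)
    f_desired  : desired constant contact force (N)
    """
    v = np.asarray(v_vision, dtype=float).copy()
    # Override the axis normal to the probe face (assumed index 2, z into
    # the tissue) with a proportional regulator toward the desired force.
    v[2] = kf * (f_desired - f_measured)
    return v
```

The remaining five components pass through unchanged, so the vision task and the force task act on complementary subspaces of the velocity screw.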
“…By extracting features from live 2D or 3D images, ultrasound-based tracking of various anatomies, such as the carotid artery, as well as of surgical tools has been achieved [16,17]. When coupled with a robot control scheme, applications such as organ motion compensation [18] and visibility maintenance in tele-operation [19] become feasible.…”
Section: Related Work
confidence: 99%