2012
DOI: 10.1007/978-3-642-33418-4_73

Feature Classification for Tracking Articulated Surgical Tools

Abstract: Tool tracking is an accepted capability for computer-aided surgical intervention which has numerous applications, both in robotic and manual minimally-invasive procedures. In this paper, we describe a tracking system which learns visual feature descriptors as class-specific landmarks on an articulated tool. The features are localized in 3D using stereo vision and are fused with the robot kinematics to track all of the joints of the dexterous manipulator. Experiments are performed using previously-col…
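The pipeline sketched in the abstract, detecting learned landmark features in both views of a calibrated stereo endoscope, triangulating them to 3D, and fusing the result with the robot kinematics, can be illustrated at the triangulation step. The minimal sketch below uses OpenCV's cv2.triangulatePoints; the calibration values and pixel coordinates are placeholders, and the paper's kinematic fusion and joint-tracking steps are not reproduced here.

```python
import numpy as np
import cv2

# Placeholder stereo calibration: intrinsics K and a small horizontal baseline.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = K @ np.hstack([np.eye(3), np.array([[-5.0], [0.0], [0.0]])])

# Matched 2D detections (pixels) of one classified landmark, shaped 2xN
# as cv2.triangulatePoints expects.
pts_left = np.array([[340.0], [250.0]])
pts_right = np.array([[300.0], [250.0]])

# Triangulate to homogeneous coordinates and normalize to a 3D point in the
# left camera frame; this point would then be fused with the kinematic chain.
X_h = cv2.triangulatePoints(P_left, P_right, pts_left, pts_right)
X = (X_h[:3] / X_h[3]).ravel()
print("Landmark position (camera frame):", X)
```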

Cited by 65 publications (66 citation statements) · References 18 publications
“…Some other vision-based methods exploit the geometric constraints [8] and the gradient-like features [9,10] in order to identify the shaft of the instrument, but fail to provide more accurate 3D positions of the instrument tip. Machine learning techniques [11–19] introduced into instrument detection and tracking train discriminative classifiers/models on the visual features of the foreground (instrument tip or shaft). Edge pixel features [11] and fast corner features [12] are utilized to train appearance models of the surgical instrument based on the likelihood map.…”
Section: Introduction (mentioning; confidence: 99%)
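As an aside on the fast corner features mentioned in the statement above [12], the snippet below shows one plausible way such candidate keypoints could be gathered with OpenCV's FAST detector before an appearance model or likelihood map scores them; the synthetic frame and threshold value are assumptions, not details from the cited work.

```python
import numpy as np
import cv2

# Synthetic grayscale frame standing in for an endoscopic image.
frame = (np.random.rand(480, 640) * 255).astype(np.uint8)

# FAST corner detection; the threshold is illustrative only.
fast = cv2.FastFeatureDetector_create(threshold=25, nonmaxSuppression=True)
keypoints = fast.detect(frame, None)

# Candidate pixel locations that a learned appearance model / likelihood map
# would subsequently score as instrument or background.
candidates = [kp.pt for kp in keypoints]
print(f"{len(candidates)} candidate corners detected")
```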
“…Some state-of-the-art image feature descriptors, e.g. region covariance (Covar) [14], scale-invariant feature transform (SIFT) [15,16] and histogram of oriented gradients (HoG) [17], are used to establish the surgical instrument model in tracking, coupled with traditional classifiers such as the support vector machine (SVM), randomized trees (RT) and so on. Bayesian sequential estimation was also applied to surgical instrument tracking via the active testing model [18].…”
Section: Introduction (mentioning; confidence: 99%)
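To make the descriptor-plus-classifier pattern described in that statement concrete, here is a minimal sketch of an HoG + SVM patch classifier using scikit-image and scikit-learn; the random training patches, patch size, and hyperparameters are placeholders and do not reflect the specifics of any cited system.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic 64x64 grayscale patches: label 1 = "instrument", 0 = "background".
# A real system would crop these around candidate detections; the data here
# only illustrates the HoG + SVM pipeline.
patches = rng.random((40, 64, 64))
labels = rng.integers(0, 2, size=40)

# Describe each patch with a histogram-of-oriented-gradients vector.
features = np.array([
    hog(p, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    for p in patches
])

# Train a binary SVM on the descriptors and score a new patch.
clf = SVC(kernel="rbf", gamma="scale").fit(features, labels)
test_feat = hog(rng.random((64, 64)), orientations=9,
                pixels_per_cell=(8, 8), cells_per_block=(2, 2))
print("Predicted class:", clf.predict(test_feat[np.newaxis]))
```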
“…Its goal is to provide accurate 2D or 3D location estimates of surgical instruments from visual data. Critical to a number of applications such as automatic endoscope control [1], instrument-surface detection [2], clinician training evaluation [3] or setting virtual constraints for instrument motion [4,5], instrument detection and tracking can significantly augment the clinicians' experience during surgical procedures.…”
Section: Introduction (mentioning; confidence: 99%)
“…Among the many approaches proposed in the last twenty years [1,2,6,7,8,9], recent detection-based schemes that rely on building statistical classifiers to evaluate the presence of the instrument appear to be the most promising for in-vivo detection and tracking [4,5,10]. Within this last category of methods, Reiter et al. [5] combined a multiclass Random Forest (RF) [11] labelling approach with robot kinematic information to estimate the instrument 3D pose. In [4], RFs were also used to handle instrument-background classification, yielding instrument segmentations and the 3D pose.…”
Section: Introduction (mentioning; confidence: 99%)
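The multiclass Random Forest labelling step attributed to Reiter et al. [5] in the statement above can be sketched with scikit-learn as below; the 32-dimensional descriptors, the three landmark classes, and the hyperparameters are assumptions for illustration, and the subsequent fusion with kinematics is only indicated in the comments.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Synthetic feature descriptors for three hypothetical landmark classes on the
# instrument (e.g. pin, wheel, clasper); real descriptors would come from the
# image features discussed above.
X_train = rng.random((300, 32))
y_train = rng.integers(0, 3, size=300)

# Multiclass Random Forest labelling; hyperparameters are illustrative only.
rf = RandomForestClassifier(n_estimators=100, max_depth=10, random_state=0)
rf.fit(X_train, y_train)

# Class-probability output for new candidate descriptors; per-class maxima
# would serve as landmark detections to combine with kinematics for 3D pose.
probs = rf.predict_proba(rng.random((5, 32)))
print(probs.round(2))
```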