2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
DOI: 10.1109/cvprw.2012.6239245
Learning features on robotic surgical tools

Abstract: Computer-aided surgical interventions in both manual and robotic procedures have been shown to improve patient outcomes and enhance the skills of the human physician. Tool tracking is one such example that has various applications. In this paper, we show how to learn fine-scaled features on surgical tools for the purpose of pose estimation. Our experiments analyze different state-of-the-art feature descriptors coupled with various learning algorithms on in-vivo data from a surgical robot. We propose that it is…

Cited by 8 publications (8 citation statements). References 18 publications.
“…This is challenging for accurate absolute position sensing and requires time-consuming hand-eye calibration between the camera and the robot coordinates. On cable-driven systems the absolute error can be up to 1 inch, which means the positional accuracy is potentially too low for tracking applications without visual correction [1][3]. Recent developments in endoscopic computer vision have resulted in advanced approaches for 2D instrument detection for minimally invasive surgery.…”
Section: Introduction
Confidence: 99%
“…Tables III and IV show that the texture descriptors, in their standard form, do not appear to be capable of effectively distinguishing tools and tissue, and due to their significant computation time, they were not included in the feature vector used to train our RF. It is important to note that, as shown in [18], descriptors can be used to identify specific points on the articulated tool, but our findings suggest that they are not suitable for whole-tool identification. Further work is required to determine whether texture descriptors would be suitable for delineating the instrument-tissue boundary rather than the tool body itself.…”
Section: A. Feature Selection For Classification
Confidence: 77%
“…They provide an accurate, fast, and potentially parallelizable classification method and offer an easy extension to multiclass data, a useful feature for classifying multiple distinct tool or tissue types [18]. The success of RFs has been due to their good generalization ability, which increases with the number of trees in the forest, combined with the robustness to noise that randomness provides.…”
(Footnote: http://www.cs.ucl.ac.uk/staff/m.allan/)
Section: Appearance Learning With Random Forests
Confidence: 99%
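The quote above describes Random Forests as fast, multiclass-capable appearance classifiers for distinguishing tool and tissue types. A minimal illustrative sketch of that idea using scikit-learn — not the cited authors' implementation; the per-pixel color features, class labels, and synthetic data here are all assumptions standing in for real in-vivo imagery:

```python
# Sketch only: multiclass Random Forest appearance classification on
# synthetic per-pixel RGB features (hypothetical classes, not the paper's).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Three synthetic appearance classes: metal tip, dark shaft, reddish tissue.
n = 300
tip = rng.normal([0.60, 0.60, 0.65], 0.05, (n, 3))
shaft = rng.normal([0.10, 0.10, 0.10], 0.05, (n, 3))
tissue = rng.normal([0.70, 0.30, 0.30], 0.05, (n, 3))
X = np.vstack([tip, shaft, tissue])
y = np.repeat([0, 1, 2], n)  # 0 = tip, 1 = shaft, 2 = tissue

# More trees generally improve generalization, as the quote notes.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[0.72, 0.28, 0.30]])[0])  # a reddish pixel -> tissue (2)
```

The multiclass extension comes for free in the forest: each tree votes over all classes, which is what makes adding further tool or tissue types straightforward.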
“…This occurs because we only include one feature on the shaft (Pin4), but in the future we will look to include more shaft information. Concurrent with this work we performed a study [21] on the feature detection accuracy, where we obtained an average localization accuracy of 86%, although this varies depending on the feature type. We also note that although some feature types are not always detected, we need only ∼3-4 on a given frame because of the fusion, and so across the 7 chosen landmarks our experiments show that the percent correct achieved is sufficient for long-term tracking.…”
Section: Results
Confidence: 99%
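The quote reports 86% average landmark localization accuracy and notes that fusion needs only ∼3-4 of the 7 landmarks per frame. A back-of-envelope check (my assumption, not a computation from the paper: treating the 7 detections as independent Bernoulli trials at p = 0.86) shows why that requirement is easily met:

```python
# Binomial sanity check: probability that at least k of 7 landmarks are
# localized in a frame, assuming independent detections at p = 0.86.
# This independence assumption is illustrative, not stated in the paper.
from math import comb

p, n_lm = 0.86, 7

def prob_at_least(k: int) -> float:
    """P(at least k successful detections out of n_lm)."""
    return sum(comb(n_lm, i) * p**i * (1 - p)**(n_lm - i)
               for i in range(k, n_lm + 1))

print(f"P(>=3 detections) = {prob_at_least(3):.4f}")
print(f"P(>=4 detections) = {prob_at_least(4):.4f}")
```

Under this assumption, a frame yields at least 3 detections well over 99% of the time, consistent with the authors' claim that the achieved accuracy suffices for long-term tracking.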