2012
DOI: 10.1007/978-3-642-33418-4_70

Data-Driven Visual Tracking in Retinal Microsurgery

Abstract: In the context of retinal microsurgery, visual tracking of instruments is a key component of robotic assistance. The difficulty of the task, and the major reason why most existing strategies fail on in-vivo image sequences, is that complex and severe changes in instrument appearance are challenging to model. This paper introduces a novel approach that is both data-driven and complementary to existing tracking techniques. In particular, we show how to learn and integrate an accurate detector…
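The abstract describes integrating a learned detector with conventional frame-to-frame tracking. As a hedged illustration only (the paper's actual fusion rule is not given in this excerpt), the sketch below re-initializes a generic tracker from the detector whenever the per-frame detection is more confident than the propagated track; `track` and `detect` are hypothetical callables, not the paper's API.

```python
# Minimal sketch (not the paper's algorithm) of detector-tracker fusion.
# `track` and `detect` are hypothetical callables returning a
# (position, confidence) pair for a frame.

def fused_tracking(frames, track, detect, init_pos):
    pos = init_pos
    trajectory = []
    for frame in frames:
        pos_t, conf_t = track(frame, pos)  # propagate previous estimate
        pos_d, conf_d = detect(frame)      # independent per-frame detection
        # Let the detector correct drift when it is the more confident source.
        pos = pos_d if conf_d > conf_t else pos_t
        trajectory.append(pos)
    return trajectory
```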

Cited by 52 publications (74 citation statements, 2014–2023)
References 10 publications
“…1(c) illustrates the evolution of stage responses of each instrument part after evaluating the patch from (a) using a learned classifier. As in [10], these features can easily be extended to be rotationally invariant and generalize well to variations across image sequences.…”
Section: Detection Framework
confidence: 99%
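The excerpt above refers to stage responses of a learned classifier evaluated per instrument part on an image patch. As a hedged sketch under assumed structure (a boosting-style classifier whose stages each contribute a weighted vote per part class; the stage layout, features, and weights are illustrative, not the paper's), the following shows how per-part responses could be snapshotted after every stage:

```python
import numpy as np

def stage_responses(patch, stages):
    """Running response of each part class after every classifier stage.

    patch  : 2-D grayscale image patch (numpy array)
    stages : list of (feature_fn, weights) pairs, where feature_fn maps
             the patch to a scalar and weights has one entry per part
             class (e.g. shaft, wedge, tip) -- hypothetical layout
    """
    n_classes = len(stages[0][1])
    responses = np.zeros((len(stages), n_classes))
    score = np.zeros(n_classes)
    for i, (feature_fn, weights) in enumerate(stages):
        score += feature_fn(patch) * np.asarray(weights)  # weak-learner vote
        responses[i] = score  # snapshot after this stage
    return responses
```

Plotting each column of the returned array against the stage index would reproduce the kind of "evolution of stage responses" curve the excerpt describes.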
“…Weighted averaging [10] on the response scores is then performed for each class to estimate the position of the different parts of the instrument, allowing the instrument center and orientation to be extracted from the part labels. In our experiments, we ran RANSAC with 500 sampling rounds and let inliers be points within 24 pixels of the model, i.e.…”
Section: Instrument Pose Estimation
confidence: 99%
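The quoted pose-estimation step combines score-weighted averaging of per-part candidates with a RANSAC fit (500 sampling rounds, 24-pixel inlier threshold, as stated in the excerpt). The sketch below, with an assumed data layout and a simple line model standing in for the instrument axis, illustrates both pieces:

```python
import numpy as np

def part_position(coords, scores):
    """Score-weighted mean of candidate pixel coordinates for one part.

    coords : (N, 2) array of (x, y) candidates
    scores : (N,) array of classifier response scores
    """
    w = scores / scores.sum()
    return (coords * w[:, None]).sum(axis=0)

def ransac_line(points, rounds=500, inlier_thresh=24.0, rng=None):
    """Fit a 2-D line to points with a basic RANSAC loop (assumed model)."""
    rng = rng or np.random.default_rng()
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(rounds):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue  # degenerate sample, skip
        n = np.array([-d[1], d[0]]) / norm   # unit normal of candidate line
        dist = np.abs((points - p) @ n)      # point-to-line distances
        inliers = dist < inlier_thresh       # 24 px threshold from the quote
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

Estimated part positions from `part_position` could then be fed to `ransac_line`, with the inlier set defining the instrument center and orientation; how the paper actually parameterizes the model is not given in this excerpt.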