2016
DOI: 10.1155/2016/1067509

CRF-Based Model for Instrument Detection and Pose Estimation in Retinal Microsurgery

Abstract: Detection of instrument tip in retinal microsurgery videos is extremely challenging due to rapid motion, illumination changes, the cluttered background, and the deformable shape of the instrument. For the same reason, frequent failures in tracking add the overhead of reinitialization of the tracking. In this work, a new method is proposed to localize not only the instrument center point but also its tips and orientation without the need of manual reinitialization. Our approach models the instrument as a Condit…

Cited by 13 publications (11 citation statements)
References 17 publications
“…• microscopy, for neurosurgery (Leppänen et al, 2018), retinal surgery (Alsheakhali et al, 2016b;Rieke et al, 2016a;Kurmann et al, 2017;Laina et al, 2017) and cataract surgery (Al Hajj et al, 2017a),…”
Section: Clinical Applications (mentioning)
Confidence: 99%
“…For flexible instruments, the goal is also to detect the tool centerline (Chang et al, 2016). Tool detection generally is an intermediate step for tool tracking, the process of monitoring tool location over time (Du et al, 2016;Rieke et al, 2016a;Lee et al, 2017b;Zhao et al, 2017;Czajkowska et al, 2018;Ryu et al, 2018;Keller et al, 2018), and pose estimation, the process of inferring a 2-D pose (Rieke et al, 2016b;Kurmann et al, 2017;Alsheakhali et al, 2016b;Du et al, 2018;Wesierski and Jezierska, 2018) or a 3-D pose (Allan et al, 2018;Gessert et al, 2018) based on the location of tool elements. Tasks associated with tool detection also include velocity estimation (Marban et al, 2017) and instrument state recognition (Sahu et al, 2016a).…”
Section: Computer Vision Tasks (mentioning)
Confidence: 99%
“…To evaluate our approach, we propose two experiments: (1) using the same training and test data as in the original challenge, with an unknown tool in the test set; (2) modifying the training and test sets so that the left scissor is also available during training, by moving sequence 6 of the test set into the training set. By flipping the images in this sequence left-to-right, we augment our training data so as to have the right scissor as well.…”
Section: Methods (mentioning)
Confidence: 99%
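The left-right flip augmentation described in the quotation above can be sketched as follows. This is a minimal illustration, not the cited authors' implementation; the array shapes, the (x, y) tip-coordinate label format, and the function name are assumptions made only for this example.

import numpy as np

def augment_with_horizontal_flip(images, tip_coords):
    """Mirror each frame left-to-right so that a left-handed tool
    (e.g. the left scissor) also appears as its right-handed counterpart.

    images:     array of shape (N, H, W, C)
    tip_coords: array of shape (N, 2) holding (x, y) tip positions in
                pixel coordinates (assumed label format for this sketch).
    """
    # Reverse the width axis to mirror every frame horizontally.
    flipped_images = images[:, :, ::-1, :]

    # Mirror the x-coordinate of each annotated tip; y is unchanged.
    width = images.shape[2]
    flipped_coords = tip_coords.copy()
    flipped_coords[:, 0] = (width - 1) - tip_coords[:, 0]

    # Return originals plus mirrored copies, doubling the training set.
    return (np.concatenate([images, flipped_images], axis=0),
            np.concatenate([tip_coords, flipped_coords], axis=0))

In the setup quoted above, the same idea is applied to sequence 6 after moving it into the training set, so that both scissor orientations are seen during training.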
“…Vision-based detection of surgical instruments in both minimally invasive surgery and microsurgery has gained increasing popularity in the last decade. This is largely due to the potential it holds for more accurate guidance of surgical robots such as the da Vinci® (Intuitive Surgical, USA) and Preceyes (Netherlands), as well as for directing imaging technology such as endoscopes [13] or OCT imaging [2] at manipulated regions of the workspace.…”
Section: Introduction (mentioning)
Confidence: 99%
“…At a granular level, tracking changes in tissue during surgery is more challenging with deformable soft tissue [75], [76] than with rigid anatomical structures such as the paranasal sinuses [77]. Beyond the patient, several methods to estimate motion or changes in pose of surgical instruments using video images and/or kinematics have been developed [78], for example, in minimally invasive surgery [79], [80], open surgery [81], microsurgery [82], endoscopy [83], bronchoscopy [84], and laser surgery [85]. Relative positions and interaction between surgical instruments may be recognized and tracked using sensors such as radiofrequency identification [86] or using video images [87], [88].…”
Section: Examples and Potential Applications (mentioning)
Confidence: 99%