2019
DOI: 10.48550/arxiv.1902.06426
Preprint

2017 Robotic Instrument Segmentation Challenge

Cited by 51 publications (121 citation statements)
References 0 publications
“…For scanpath, compare the top rank and all average instruments priority between ground-truth and prediction. Overall evaluation is done by using cross-validation and testing dataset of MICCAI endoscopic vision challenge [2].…”
Section: Results
Mentioning confidence: 99%
“…Subsequently, a holistically nested CNN approach ToolNet [19], FCN with affine transformation, joint CNN, and recurrent neural network (RNN) are utilized to track the surgical instrument. Nonetheless, the performance of these models is not satisfactory, especially for the instrument type segmentation [2]. Moreover, with less computational resources, achieving multiple tasks from a single model is more competent.…”
Section: A. Related Work
Mentioning confidence: 99%