2022
DOI: 10.1007/978-3-031-16449-1_37
CaRTS: Causality-Driven Robot Tool Segmentation from Vision and Kinematics Data

Cited by 13 publications (6 citation statements) · References 36 publications
“…Prevalent challenges in this setting have to do with simulation of deformable surfaces [124], with much work focused on developing intelligent control policies for robotic surgery. Surgeons and intelligent systems must also contend with smoke from energized devices [25,38] and blood. Finally, in order to facilitate machine learning, it is desirable for simulation frameworks to provide various ground truth data by default, including segmentation maps, depth maps, object poses, camera pose, and more [98].…”
Section: Visible Light Imaging (mentioning)
confidence: 99%
“…They collect real image and kinematic data using the dVRK and utilize the kinematic data to produce image data of virtual tools using a dVRK simulator [134]. Similarly, Ding et al [38] introduce a vision- and kinematics-based approach to robot tool segmentation built on a complementary causal model that preserves accuracy under domain shift to unseen domains. They collect a counterfactual dataset, where individual instances differ only by the presence or absence of a specific source of corruption, using a technique similar to that in [133], further demonstrating the utility of this type of data collection while reporting similar challenges, namely the lack of tool-to-tissue interaction under the replay paradigm.…”
Section: Alternative Framework (mentioning)
confidence: 99%
“…On the robustness of medical procedures, (Ding et al, 2022) built a causal tool segmentation model that iteratively aligns tool masks with observations. The model does not yet handle occlusions or leverage temporal information, and the authors of this recent work comment on these as next steps for robust causal machine learning tools.…”
Section: Out-of-Distribution Robustness and Detection (mentioning)
confidence: 99%
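As a rough illustration of the "iteratively aligns tool masks with observations" idea described in the statement above, the following minimal Python sketch refines a kinematics-derived pose estimate by maximising the agreement between a rendered tool mask and an observed segmentation. The renderer, the single-angle parameterisation, and the local-search loop are simplified, hypothetical stand-ins, not the actual method of Ding et al.

# Minimal sketch (not the authors' implementation): iteratively correct a
# kinematics-derived tool pose so its rendered mask matches the observation.
import numpy as np

H, W = 64, 64

def render_tool_mask(angle_rad, shape=(H, W)):
    # Hypothetical renderer: draw a crude "tool" as a thick line through the
    # image centre at the given angle.
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    cy, cx = shape[0] / 2, shape[1] / 2
    d = np.abs(np.cos(angle_rad) * (ys - cy) - np.sin(angle_rad) * (xs - cx))
    return (d < 3.0).astype(np.float32)

def dice(a, b, eps=1e-6):
    # Overlap score between two binary masks.
    return (2 * (a * b).sum() + eps) / (a.sum() + b.sum() + eps)

# "Observed" segmentation: the true tool pose differs from the kinematics
# estimate, emulating calibration error / domain shift.
true_angle = 0.60
observed_mask = render_tool_mask(true_angle)

# Kinematics give a biased initial estimate; refine it by maximising
# mask-observation agreement (simple local search for clarity).
estimate, step = 0.30, 0.05
for _ in range(50):
    candidates = [estimate - step, estimate, estimate + step]
    scores = [dice(render_tool_mask(a), observed_mask) for a in candidates]
    best = int(np.argmax(scores))
    if best == 1:
        step *= 0.5  # shrink the step once no neighbour improves
    estimate = candidates[best]

print(f"initial error: {abs(0.30 - true_angle):.3f} rad, "
      f"final error: {abs(estimate - true_angle):.3f} rad, "
      f"final Dice: {dice(render_tool_mask(estimate), observed_mask):.3f}")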
“…phase recognition [22,2,9], action detection [14,15], or tool detection [7,4], these approaches do not focus on holistic OR understanding. Most recently, Özsoy et al [16] proposed a new dataset, 4D-OR, and an approach for holistic OR modeling.…”
Section: Introduction (mentioning)
confidence: 99%