2018
DOI: 10.1007/978-3-030-01201-4_14

Learning to See Forces: Surgical Force Prediction with RGB-Point Cloud Temporal Convolutional Networks

Abstract: Robotic surgery has been proven to offer clear advantages during surgical procedures; however, one of its major limitations is obtaining haptic feedback. Since it is often challenging to devise a hardware solution with accurate force feedback, we propose the use of "visual cues" to infer forces from tissue deformation. Endoscopic video is a passive sensor that is freely available, in the sense that any minimally-invasive procedure already utilizes it. To this end, we employ deep learning to infer forces from v…


Cited by 21 publications (4 citation statements)
References 13 publications
“…Research on training CNNs for image-based force estimation with more enriched input data is still in progress. In 2018, Gao et al. [64] proposed a vision-based surgical force prediction model, called RGB-Point Cloud Temporal Convolutional Network (RPC-TCN), which combined RGB-D information with time-series analysis (see Fig. 15).…”
Section: Evolution of Learning-based Methods
confidence: 99%
“…In [102], the authors proposed force estimation based on visual cues to infer tissue deformation, using a Temporal Convolutional Network (TCN). The inputs to the network are RGB and depth images collected with a Kinect v2 camera.…”
Section: Automation of Surgical Tasks
confidence: 99%
“…This technique involves several different types of image-processing methods, such as feature extraction and filtering of light reflections, and can also be approached using neural networks, but the crucial parts are usually the reconstruction of the tissue surface and handling the inhomogeneity of the tissue. Based on the detected deformations of the targeted tissue surface, the applied force values can be estimated [33], [61]–[63], [65]. Despite the mentioned benefits of this technique, the implementation is extremely complex and usually computationally intensive.…”
Section: Palpation with Vision-based Force Estimation
confidence: 99%