2018
DOI: 10.1007/978-3-030-00937-3_29
Deep Reinforcement Learning for Surgical Gesture Segmentation and Classification

Abstract: Recognition of surgical gesture is crucial for surgical skill assessment and efficient surgery training. Prior works on this task are based on either variant graphical models such as HMMs and CRFs, or deep learning models such as Recurrent Neural Networks and Temporal Convolutional Networks. Most of the current approaches usually suffer from over-segmentation and therefore low segment-level edit scores. In contrast, we present an essentially different methodology by modeling the task as a sequential decision-m…
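The abstract frames segmentation as a sequential decision-making process, and one of the citing works notes that "at each time stamp, predictions are generated over K frames ahead." The toy sketch below illustrates that formulation only: an agent walks through a frame-label sequence and each action labels the next K frames, rewarded by per-frame accuracy. All names (`SegmentationEnv`, the gesture vocabulary, the reward) are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical sketch of the sequential decision-making view of gesture
# segmentation: each action applies one gesture label to the next K frames.
# This is NOT the paper's implementation; names and reward are assumptions.

class SegmentationEnv:
    """Toy environment over a ground-truth frame-label sequence."""

    def __init__(self, frame_labels, k=4):
        self.labels = frame_labels
        self.k = k      # number of frames covered by each prediction
        self.pos = 0    # current frame index

    def step(self, gesture):
        """Apply `gesture` to the next K frames; reward = fraction correct."""
        window = self.labels[self.pos:self.pos + self.k]
        reward = sum(g == gesture for g in window) / max(len(window), 1)
        self.pos += self.k
        done = self.pos >= len(self.labels)
        return reward, done


def run_episode(env, policy):
    """Roll a policy (frame index -> gesture) through the whole sequence."""
    total, done = 0.0, False
    while not done:
        reward, done = env.step(policy(env.pos))
        total += reward
    return total


# An "oracle" policy that reads the ground truth attains the maximum
# reward of one per decision step (24 frames / K=4 -> 6 steps).
truth = ["reach"] * 8 + ["grasp"] * 8 + ["transfer"] * 8
env = SegmentationEnv(truth, k=4)
print(run_episode(env, lambda pos: truth[pos]))  # 6.0
```

Because one action spans K frames, the agent emits far fewer label changes than a per-frame classifier, which is the intuition behind the higher segment-level edit scores the abstract claims.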

Cited by 53 publications (57 citation statements)
References 15 publications
“…The real kinematic data were then replaced with virtual kinematic signals estimated from the video data through a spatial CNN, addressing the situation where kinematic information is available during . c: Gesture recognition with reinforcement learning [43]. At each time stamp, predictions are generated over K frames ahead.…”
Section: Conditional Random Fields
confidence: 99%
“…The JIGSAWS dataset has extended annotations at a sub-task level. AI techniques learn patterns and temporal interconnections of the sub-task sequences from combinations of robot kinematics and surgical video, and detect and temporally localize each sub-task [19][20][21][22][23][24]. Recently, AI models for activity recognition have been developed and tested on annotated datasets from real cases of robotic-assisted radical prostatectomy and ocular microsurgery [18][19][20].…”
Section: Surgical Phase Recognition
confidence: 99%
“…Their model is trainable with weak annotations that need 3D bounding boxes for all instances and full voxel annotations for only a small fraction of instances. Liu et al [164] employed a novel deep reinforcement learning approach for the segmentation and classification of surgical gestures. Their approach performs well on the JIGSAWS dataset in terms of edit score compared with previous similar works.…”
Section: Miscellaneous
confidence: 99%