2021
DOI: 10.1109/tbme.2021.3054828
Gesture Recognition in Robotic Surgery: A Review

Abstract: Surgical activity recognition is a fundamental step in computer-assisted interventions. This paper reviews the state-of-the-art in methods for automatic recognition of fine-grained gestures in robotic surgery focusing on recent data-driven approaches and outlines the open questions and future research directions. Methods: An article search was performed on 5 bibliographic databases with combinations of the following search terms: robotic, robot-assisted, JIGSAWS, surgery, surgical, gesture, fine-grained, surge…

Cited by 88 publications (57 citation statements)
References 79 publications
“…The presented approach focuses on the task state estimation of a known task series. This approach could be extended by surgical gesture recognition inferring which task the surgeon aims to complete ( van Amsterdam et al, 2021 ). This would allow for more flexible task series.…”
Section: Discussion (mentioning)
confidence: 99%
“…However, VFs have to be parameterized according to the context of the task at hand in order to provide reasonable assistance. Due to the unstructured dynamic environment inside the patient’s body, automatic perception of the procedural context in a real surgery remains challenging ( van Amsterdam et al, 2021 ). Surgical training, in turn, offers a well-defined environment.…”
Section: Introduction (mentioning)
confidence: 99%
“…In [176], the legal implications of using AI for automation in surgical practice are discussed, while virtual and augmented reality in robotic surgery are reviewed in [51]. A recent review of gesture analysis in surgical robotics summarized the state of the art in this field [266].…”
Section: Reviews (mentioning)
confidence: 99%
“…In this case, kinematic measurements can be obtained, for example, via wireless sensors [34], electromagnetic sensors [35], and optical and camera trackers [36].…”

[Flattened table from the citing paper; recoverable rows:]
- (first category label lost in extraction): Nonsystematic review [15–17]; Research [18–21]; Validation of measurements [18,20,21]; Crossover trial [19]. Note: baseline data collections and wearable sensors are always required.
- Vision-based: Nonsystematic review [22–25]; Systematic review [26]; Research [27–31]; Technical validation [27–31]. Note: persisting challenges in image segmentation; black-box algorithms do not easily translate to training strategies.
- Motion-based: Nonsystematic review [32]; Research [33–36]; Validation of measurements [33]; Validation of assessment [34–36]. Note: augmenting robot sensing capabilities…
Section: Motion-related Sensing (mentioning)
confidence: 99%
“…surgeon (or superior) remains distant, significant technical progress continues to be made, piecewise, in allowing quantitative, vision-based feedback for guiding and informing robotic surgery. Video-based methods have been proposed for a variety of relevant objectives [22], including characterization of tool articulation and kinematics [27,28], phase and step recognition in surgical procedures [29], classification of action, gestures, and tasks [26], and assessment of surgical skill [23,30,31]. Notably, at the heart of state-of-the-art approaches to surgical video analysis is deep learning (DL), a subfield of machine learning involving models that can automatically learn multiple layers of data representation to capture increasingly complex patterns in a hierarchical fashion [24].…”
Section: Vision-based (mentioning)
confidence: 99%