We apply recurrent neural networks to the task of recognizing surgical activities from robot kinematics. Prior work in this area focuses on recognizing short, low-level activities, or gestures, and has been based on variants of hidden Markov models and conditional random fields. In contrast, we recognize both gestures and longer, higher-level activities, or maneuvers, and we model the mapping from kinematics to gestures/maneuvers with recurrent neural networks. To our knowledge, we are the first to apply recurrent neural networks to this task. Using a single model and a single set of hyperparameters, we match state-of-the-art performance for gesture recognition and advance the state of the art for maneuver recognition, in terms of both accuracy and edit distance. Code is available at https://github.com/rdipietro/miccai-2016-surgical-activity-rec.
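The core idea above, a recurrent network mapping a sequence of kinematic feature vectors to a per-frame gesture/maneuver label, can be sketched as a minimal forward pass. This is an illustrative sketch only: the feature count, hidden size, label set, and random weights below are all hypothetical stand-ins, not the paper's trained model.

```python
import numpy as np

# Hypothetical dimensions; the paper's actual features and label set differ.
N_FEATURES = 14   # e.g., instrument positions/velocities from robot kinematics
N_HIDDEN = 16
N_LABELS = 5      # gesture/maneuver classes

rng = np.random.default_rng(0)
W_xh = rng.normal(scale=0.1, size=(N_HIDDEN, N_FEATURES))  # input-to-hidden
W_hh = rng.normal(scale=0.1, size=(N_HIDDEN, N_HIDDEN))    # hidden-to-hidden
W_hy = rng.normal(scale=0.1, size=(N_LABELS, N_HIDDEN))    # hidden-to-label

def rnn_label_sequence(x_seq):
    """Map a (T, N_FEATURES) kinematic sequence to a per-frame label id list."""
    h = np.zeros(N_HIDDEN)
    labels = []
    for x in x_seq:
        h = np.tanh(W_xh @ x + W_hh @ h)         # recurrent state update
        labels.append(int(np.argmax(W_hy @ h)))  # per-frame prediction
    return labels

x_seq = rng.normal(size=(100, N_FEATURES))  # 100 frames of synthetic kinematics
labels = rnn_label_sequence(x_seq)
```

In practice one would use a gated cell (e.g., an LSTM) trained with a per-frame cross-entropy loss, but the sequence-to-label-sequence structure is the same.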
Objective: To develop a robotic surgery training regimen for otolaryngology-head and neck surgery trainees that integrates objective skill assessment, consisting of training modules of increasing complexity and leading up to procedure-specific training. In particular, we investigate applications of this training approach for surgical extirpation of oropharyngeal tumors via a transoral approach using the da Vinci robotic system.
Study Design: Prospective blinded data collection and objective evaluation (OSATS) of three distinct phases using the da Vinci robotic surgical system.
Setting: Academic university medical engineering/computer science laboratory.
Methods: Between September 2010 and July 2011, 8 otolaryngology-head and neck surgery residents and 4 staff "experts" from an academic hospital participated in three distinct phases of robotic surgery training involving (1) robotic platform operational skills, (2) set-up of the patient-side system, and (3) a complete ex vivo surgical extirpation of an oropharyngeal "tumor" located in the base of tongue. Trainees performed four approximately equally spaced training sessions in each stage of the training; baseline performance data were also obtained for the experts. Each surgical stage was documented with motion and event data captured from the application programming interface (API) of the da Vinci system, as well as separate video cameras as appropriate. All data were assessed using automated skill measures of task efficiency and correlated with structured assessments (OSATS and similar Likert scales) from three experts to assess expert-trainee differences and to compute automated and expert-assessed learning curves.
Results: Our data show that such training results in an improved didactic robotic knowledge base and improved clinical efficiency with respect to set-up and console manipulation. Experts (e.g., average OSATS 25, SD 3.1, module 1, suturing) and trainees (average OSATS 15.9, SD 3.9, week 1) are well separated at the beginning of the training, and the separation narrows substantially (expert average OSATS 27.6, SD 2.7; trainee average OSATS 24.2, SD 6.8, module 3) at its conclusion. Learning curves in each of the three stages show diminishing differences between experts and trainees, consistent with expert assessment. Subjective assessment by experts verified the clinical utility of the module 3 surgical environment, and a survey of trainees consistently rated the curriculum as very useful in progression to human operating room assistance.
Conclusions: Structured curricular robotic surgery training with objective assessment promises to reduce the overhead for mentors, allow detailed assessment of human-machine interface skills, and create customized training models for individualized training. This preliminary study verifies the utility of such training in improving human-machine operation skills (module 1) and operating room and surgical skills (modules 2 and 3). In contrast to cur...
Our framework implemented using crowdsourced pairwise comparisons leads to valid objective surgical skill assessment for segments within a task, and for the task overall. Crowdsourcing yields reliable pairwise comparisons of skill for segments within a task with high efficiency. Our framework may be deployed within surgical training programs for objective, automated, and standardized evaluation of technical skills.
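Aggregating crowdsourced pairwise skill comparisons into a ranking can be done with a Bradley-Terry model, one standard choice for this kind of data; the abstract does not specify the framework's actual aggregation method, and the win counts below are invented for illustration.

```python
import numpy as np

# wins[i][j] = number of crowd judgments rating performance i as more
# skilled than performance j (synthetic counts, for illustration only).
wins = np.array([
    [0, 8, 9],
    [2, 0, 7],
    [1, 3, 0],
], dtype=float)

def bradley_terry(wins, n_iter=200):
    """Fit Bradley-Terry skill scores via the standard MM iteration."""
    n = wins.shape[0]
    p = np.ones(n)
    for _ in range(n_iter):
        for i in range(n):
            num = wins[i].sum()  # total wins for i
            denom = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            p[i] = num / denom
        p /= p.sum()  # fix the arbitrary scale
    return p

scores = bradley_terry(wins)
ranking = np.argsort(-scores)  # most skilled first
```

The scores induce a full ordering over segments or trials even when no single rater saw every pair, which is what makes pairwise crowdsourcing efficient.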
Background: Surgical tasks are performed in a sequence of steps, and technical skill evaluation includes assessing task flow efficiency. Our objective was to describe differences in task flow for expert and novice surgeons for a basic surgical task.
Methods: We used a hierarchical semantic vocabulary to decompose and annotate maneuvers and gestures for 135 instances of a surgeon's knot performed by 18 surgeons. We compared counts of maneuvers and gestures, and analyzed task flow by skill level.
Results: Experts used fewer gestures to perform the task (26.29; 95% CI = 25.21 to 27.38 for experts vs. 31.30; 95% CI = 29.05 to 33.55 for novices) and made fewer errors in gestures than novices (1.00; 95% CI = 0.61 to 1.39 vs. 2.84; 95% CI = 2.30 to 3.37). Transitions among maneuvers, and among gestures within each maneuver, were more predictable for expert trials than for novice trials.
Conclusions: Activity segments and state flow transitions within a basic surgical task differ by surgical skill level, and can be used to provide targeted feedback to surgical trainees.
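One simple way to quantify "more predictable transitions" is to estimate a first-order transition matrix over annotated gestures and measure the average entropy of its rows: a deterministic flow has entropy zero, a varied flow has positive entropy. The gesture names and sequences below are synthetic stand-ins, not the study's vocabulary or data.

```python
from collections import Counter

import numpy as np

# Synthetic gesture sequences (hypothetical labels, for illustration only).
expert_seq = ["reach", "grasp", "pull", "reach", "grasp", "pull",
              "reach", "grasp", "pull"]
novice_seq = ["reach", "pull", "grasp", "reach", "grasp", "reach",
              "pull", "pull", "grasp"]

def transition_entropy(seq):
    """Mean entropy (bits) of the next-gesture distribution per state.

    Lower values mean more predictable gesture-to-gesture flow.
    """
    counts = Counter(zip(seq, seq[1:]))
    states = sorted(set(seq))
    total, n_rows = 0.0, 0
    for s in states:
        row = np.array([counts[(s, t)] for t in states], dtype=float)
        if row.sum() == 0:
            continue  # state with no outgoing transitions
        p = row / row.sum()
        p = p[p > 0]
        total += -(p * np.log2(p)).sum()
        n_rows += 1
    return total / n_rows

expert_H = transition_entropy(expert_seq)  # perfectly cyclic flow -> 0 bits
novice_H = transition_entropy(novice_seq)  # varied flow -> positive entropy
```

With real annotations, comparing this statistic across skill groups gives a single number summarizing the flow-predictability difference the study reports.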
Background: With increased use of robotic surgery in specialties including urology, development of training methods has also intensified. However, current approaches lack the ability to discriminate operational and surgical skills.
Methods: An automated recording system was used to longitudinally (monthly) acquire instrument motion/telemetry and video for 4 basic surgical skills: suturing, manipulation, transection, and dissection. Statistical models were then developed to discriminate the human-machine skill differences between practicing expert surgeons and trainees.
Results: Data from 6 trainees and 2 experts were analyzed to validate the first statistical models of operational skills, demonstrating classification with very high accuracy (91.7% for masters and 88.2% for camera motion) and sensitivity.
Conclusions: We report on a longitudinal study aimed at tracking robotic surgery trainees to proficiency, and on methods capable of objectively assessing the operational and technical skills used to gauge trainee progress at the participating institutions.
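Classifying expert versus trainee trials from instrument-motion summaries can be illustrated with a minimal nearest-centroid classifier on two hypothetical features, path length and completion time. The features, group means, and classifier choice are all assumptions for the sketch; the paper's actual statistical models are not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic trials: assume experts move less (shorter path, in m) and finish
# faster (seconds) than trainees, with some within-group variability.
expert = rng.normal(loc=[1.0, 30.0], scale=[0.1, 3.0], size=(20, 2))
trainee = rng.normal(loc=[2.0, 60.0], scale=[0.3, 8.0], size=(20, 2))

X = np.vstack([expert, trainee])
y = np.array([0] * 20 + [1] * 20)  # 0 = expert, 1 = trainee

def nearest_centroid_fit(X, y):
    """Per-class mean feature vector."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(centroids, X):
    """Assign each trial to the class with the closest centroid."""
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

centroids = nearest_centroid_fit(X, y)
accuracy = (nearest_centroid_predict(centroids, X) == y).mean()
```

A real pipeline would evaluate with held-out trials (e.g., leave-one-surgeon-out) rather than training-set accuracy, which is what headline figures like those above would be based on.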