Purpose Surgical workflow recognition and context-aware systems could enable better decision making and surgical planning by providing focused information, which may ultimately improve surgical outcomes. While current developments in computer-assisted surgical systems mostly focus on recognizing surgical phases, they lack recognition of the surgical workflow sequence and of other contextual elements, e.g., "Instruments." Our study proposes a hybrid approach, combining deep learning and knowledge representation, to facilitate recognition of the surgical workflow. Methods We implemented the "Deep-Onto" network, an ensemble of deep learning models and knowledge management tools, namely an ontology and production rules. As a prototypical scenario, we chose robot-assisted partial nephrectomy (RAPN). We annotated RAPN videos with surgical entities, e.g., "Step." We performed different experiments, including inter-subject variability, to recognize surgical steps. The corresponding subsequent steps, along with other surgical contexts, i.e., "Actions," "Phase" and "Instruments," were also recognized. Results The system recognized 10 RAPN steps with a prevalence-weighted macro-average (PWMA) recall of 0.83, PWMA precision of 0.74, PWMA F1 score of 0.76, and an accuracy of 74.29% on 9 RAPN videos. Conclusion We found that the combined use of deep learning and knowledge representation techniques is a promising approach for multi-level recognition of the RAPN surgical workflow.
Surgical workflow modeling is becoming increasingly useful for training surgical residents in complex surgical procedures. Rule-based surgical workflows have been shown to be useful for creating context-aware systems. However, manually constructing production rules is a time-intensive and laborious task. With the expansion of new technologies, large video archives can be created and annotated, exploiting and storing the expert's knowledge. This paper presents a prototypical study of the automatic generation of production rules, in Horn-clause form, using the First Order Inductive Learner (FOIL) algorithm applied to annotated surgical videos of the Thoracentesis procedure, and assesses its feasibility for use in a context-aware system framework. The algorithm learned 18 rules for the surgical workflow model, with 0.88 precision and 0.94 F1 score on the standard video annotation data representing entities of the surgical workflow, which was used to retrieve contextual information on the Thoracentesis workflow for its application to surgical training.
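A Horn-clause production rule of the kind FOIL induces can be sketched as a simple conjunctive check over annotated workflow facts. The sketch below is illustrative only: the predicates, step names, and the rule itself are hypothetical assumptions, not rules learned in the study.

```python
# Minimal sketch of evaluating a FOIL-style Horn-clause rule against facts
# extracted from surgical video annotations. All predicates and step names
# here are hypothetical examples, not the paper's learned rule set.

# Facts as (predicate, subject, object) triples from annotations.
facts = {
    ("current_step", "case1", "needle_insertion"),
    ("instrument_in_use", "case1", "catheter"),
}

def rule_next_step(case, facts):
    """Horn clause (hypothetical):
    next_step(C, fluid_drainage) :-
        current_step(C, needle_insertion),
        instrument_in_use(C, catheter).
    Returns the predicted next step if the rule body is satisfied."""
    if (("current_step", case, "needle_insertion") in facts
            and ("instrument_in_use", case, "catheter") in facts):
        return "fluid_drainage"
    return None

print(rule_next_step("case1", facts))  # -> fluid_drainage
```

In a context-aware system, a set of such learned rules would be matched against the recognized workflow state to retrieve contextual information for trainees.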