Objective, skill assessment-based personal performance feedback is a vital part of surgical training. Objective, algorithm-driven skill assessment can employ either kinematic data—acquired through surgical robotic systems, sensors mounted on tooltips or wearable sensors—or visual input data. Kinematic data have been successfully linked with the expertise of surgeons performing Robot-Assisted Minimally Invasive Surgery (RAMIS) procedures, but for traditional, manual Minimally Invasive Surgery (MIS) they are not readily available. Evaluation methods based on 3D visual features tend to outperform 2D methods, but their applicability is limited and they are not well suited to MIS training; our proposed solution therefore relies on 2D features. Additional sensors could potentially enhance the performance of either approach. This paper introduces a general 2D image-based solution that enables the creation and application of surgical skill assessment in any training environment. The 2D features were processed with the feature extraction techniques of a previously published benchmark to assess the attainable accuracy. We relied on the JHU–ISI Gesture and Skill Assessment Working Set (JIGSAWS), co-developed by Johns Hopkins University and Intuitive Surgical Inc. Using this well-established dataset gives us the opportunity to evaluate different feature extraction techniques comparatively. The algorithm reached up to 95.74% accuracy in individual trials. The highest mean accuracy—averaged over five cross-validation trials—was 83.54% for the surgical subtask of Knot-Tying, 84.23% for Needle-Passing and 81.58% for Suturing. The proposed method measured well against the state of the art in 2D visual-based skill assessment, with more than 80% accuracy for all three surgical subtasks available in JIGSAWS (Knot-Tying, Suturing and Needle-Passing).
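To make the 2D visual pipeline concrete, the sketch below shows one plausible way to summarise a dense optical-flow field as a fixed-length feature vector for a skill classifier. This is a minimal, hypothetical illustration (a joint magnitude/direction histogram in pure NumPy), not the benchmark's actual feature extraction; the function name and bin counts are assumptions for illustration only.

```python
import numpy as np

def flow_histogram_features(flow, n_mag_bins=8, n_ang_bins=8, max_mag=10.0):
    """Summarise a dense optical-flow field of shape (H, W, 2) as a
    fixed-length feature vector: a joint histogram over flow magnitude
    and direction, normalised to a probability distribution.
    A simplified stand-in for optical-flow-based 2D visual features."""
    dx, dy = flow[..., 0].ravel(), flow[..., 1].ravel()
    mag = np.hypot(dx, dy)          # per-pixel motion magnitude
    ang = np.arctan2(dy, dx)        # per-pixel motion direction, in [-pi, pi]
    hist, _, _ = np.histogram2d(
        np.clip(mag, 0.0, max_mag), ang,
        bins=[n_mag_bins, n_ang_bins],
        range=[[0.0, max_mag], [-np.pi, np.pi]],
    )
    feats = hist.ravel()            # flatten to an (n_mag_bins * n_ang_bins,) vector
    return feats / max(feats.sum(), 1.0)
```

A uniform rightward flow, for example, concentrates all histogram mass in a single direction bin, so trials with different motion patterns yield distinguishable feature vectors.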
By introducing new visual features—such as image-based orientation and image-based collision detection—or, on the evaluation side, utilising other Support Vector Machine kernel methods, tuning the hyperparameters or using other classification methods (e.g., the boosted trees algorithm), classification accuracy can be further improved. We showed the potential of optical flow as an input for RAMIS skill assessment, highlighting the maximum accuracy achievable with these data by evaluating each method of an established skill assessment benchmark independently. The highest-performing method, the Residual Neural Network, reached mean accuracies of 81.89%, 84.23% and 83.54% for the skills of Suturing, Needle-Passing and Knot-Tying, respectively.
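The evaluation step mentioned above (an SVM classifier scored with cross-validation) can be sketched as follows. This is a minimal illustration using scikit-learn on synthetic feature vectors; the feature dimensions, class means and hyperparameter values are assumptions, not the values used in the paper.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hypothetical feature matrix: one row of motion features per trial,
# labelled novice (0) / intermediate (1) / expert (2). Synthetic data
# here; the real pipeline would use features extracted from video.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(20, 16))
               for c in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 20)

# The kernel, C and gamma are the tunable hyperparameters the text refers to.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")

# Five cross-validation trials; the reported metric is their mean accuracy.
scores = cross_val_score(clf, X, y, cv=5)
mean_acc = scores.mean()
```

Swapping `SVC` for a boosted-trees classifier, or changing `kernel=`, keeps the rest of the evaluation loop unchanged, which is what makes hyperparameter and classifier comparisons straightforward in this setup.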
The practice of Robot-Assisted Minimally Invasive Surgery (RAMIS) requires extensive skills from human surgeons due to the specialised input device control, such as moving the surgical instruments and operating buttons, knobs and foot pedals. The global popularity of RAMIS has created the need to objectively assess surgical skills, not just for quality assurance reasons but for training feedback as well. There is still no routine surgical skill assessment during RAMIS training and education in clinical practice. In this paper, a review of manual and automated RAMIS skill assessment techniques is provided, focusing on their general applicability, robustness and clinical relevance.
Automation of surgical processes (SPs) is an utterly complex, yet highly demanded feature by medical experts. Currently, only surgical tools with advanced sensory and diagnostic capabilities are available. A major criticism of the newly developed instruments is that they do not fit into the existing medical workflow, often creating more annoyance than benefit for the surgeon. The first step toward streamlined integration of computer technologies is gaining a better understanding of the SP. Surgical ontologies provide a generic platform for describing elements of surgical procedures. Surgical Process Models (SPMs) built on top of these ontologies have the potential to accurately represent the surgical workflow. SPMs provide the opportunity to use ontological terms as the basis of automation, allowing the developed algorithm to integrate easily into the surgical workflow, and to apply the automated SPMs wherever the linked ontological term appears in the workflow. In this work, as an example of this concept, the subtask-level ontological term “blunt dissection” was targeted for automation. We implemented a computer vision-driven approach to demonstrate that automation at this task level is feasible. The algorithm was tested on an experimental silicone phantom as well as in several ex vivo environments. The implementation used the da Vinci surgical robot, controlled via the da Vinci Research Kit (DVRK), relying on a code base shared among the DVRK institutions. It is believed that developing and linking further building blocks of lower-level surgical subtasks could lead to the introduction of automated soft tissue surgery. In the future, the building blocks could be individually unit tested, leading to incremental automation of the domain. This framework could potentially standardize surgical performance, eventually improving patient outcomes.
BACKGROUND: Sensor technologies and data collection practices are changing and improving quality metrics across various domains. Surgical skill assessment in Robot-Assisted Minimally Invasive Surgery (RAMIS) is essential for training and quality assurance. The mental workload on the surgeon (arising from factors such as time criticality, task complexity and distractions) and non-technical surgical skills (including situational awareness, decision making, stress resilience, communication and leadership) may directly influence the clinical outcome of the surgery. METHODS: A literature search of the PubMed, Scopus and PsycNet databases was conducted for relevant scientific publications. The standard PRISMA method was followed to filter the search results, including non-technical skill assessment and mental/cognitive load and workload estimation in RAMIS. Publications related to traditional manual Minimally Invasive Surgery and usability studies of surgical tools were excluded. RESULTS: 50 relevant publications were identified for non-technical skill assessment and mental load and workload estimation in the domain of RAMIS. The identified assessment techniques ranged from self-rating questionnaires and expert ratings to autonomous techniques; their most important benefits and disadvantages are cited. CONCLUSIONS: Despite the systematic research, only a limited number of articles were found, indicating that non-technical skill and mental load assessment in RAMIS is not a well-studied area. Workload assessment and soft skill measurement do not yet constitute part of regular clinical training and practice. Meanwhile, the importance of the research domain is clear based on the publicly available surgical error statistics.
Questionnaires and expert-rating techniques are widely employed in traditional surgical skill assessment; nevertheless, recent technological developments in sensors and Internet of Things-type devices show that skill assessment approaches in RAMIS can be made much more profound by employing automated solutions. Measurements, and especially big-data-type analysis, may introduce more objectivity and transparency to this critical domain as well. SIGNIFICANCE: Non-technical skill assessment and mental load evaluation in Robot-Assisted Minimally Invasive Surgery is not yet a well-studied area, while the importance of this domain from the clinical outcome's point of view is clearly indicated by the available surgical error statistics.
Medical imaging introduced the greatest paradigm change in the history of modern medicine, and ultrasound (US) in particular is becoming the most widespread imaging modality. The integration of digital imaging into the surgical domain opens new frontiers in diagnostics and intervention, and its combination with robotics leads to improved accuracy and targeting capabilities. This paper reviews the state of the art in US-based robotic platforms, identifying the main research and clinical trends and reviewing current capabilities and limitations. The focus of the study includes non-autonomous US-based systems, US-based automated robotic navigation systems and US-guided autonomous tools. These areas outline future development, projecting a swarm of new applications in the computer-assisted surgical domain.
Minimally Invasive Surgery (MIS), which is very beneficial to the patient but can be challenging for the surgeon, involves endoscopic camera handling either by an assistant (traditional MIS) or by a robotic arm under the control of the operator (Robot-Assisted MIS, RAMIS). Since in the case of RAMIS the endoscopic image is the sole sensory input, it is essential for patient safety to keep the surgical tools in the field of view of the camera. Based on the endoscopic images, the movement of the endoscope holder arm can be automated by visual servoing techniques, which can reduce the risk of medical error. In this paper, we propose a marker-based visual servoing technique for automated camera positioning in RAMIS. The method was validated on the research-enhanced da Vinci Surgical System.
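The core idea of image-based visual servoing for camera centering can be sketched in a few lines: compute the pixel error between the tracked tool marker and the image centre, and command a motion proportional to that error. The sketch below is a simplified illustration under the (hypothetical) assumption that camera motion maps one-to-one to marker motion in the image plane; the function name and gain are illustrative, not from the paper.

```python
import numpy as np

def centering_step(marker_px, image_size, gain=0.5):
    """One step of a proportional image-based visual-servoing law:
    drive the camera so the tracked marker moves toward the image
    centre. Returns the commanded image-plane velocity and the
    predicted marker position after the step (simplified model:
    camera motion shifts the marker by the commanded velocity)."""
    marker = np.asarray(marker_px, dtype=float)
    center = np.asarray(image_size, dtype=float) / 2.0
    error = marker - center        # pixel error from the image centre
    velocity = -gain * error       # proportional control command
    return velocity, marker + velocity

# Iterating the control law drives the marker toward the image centre.
pos = (600.0, 120.0)               # initial marker position in a 640x480 image
for _ in range(20):
    _, pos = centering_step(pos, (640, 480))
# pos has converged close to the image centre (320, 240)
```

In a real system the marker position would come from per-frame detection in the endoscopic image, and the commanded velocity would be mapped through the camera and robot kinematics rather than applied directly in pixel space.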