IMPORTANCE Despite advances in the assessment of technical skills in surgery, a clear understanding of the composites of technical expertise is lacking. Surgical simulation allows for the quantitation of psychomotor skills, generating data sets that can be analyzed using machine learning algorithms.

OBJECTIVE To identify surgical and operative factors selected by a machine learning algorithm to accurately classify participants by level of expertise in a virtual reality surgical procedure.

DESIGN, SETTING, AND PARTICIPANTS Fifty participants from a single university were recruited between March 1, 2015, and May 31, 2016, to participate in a case series study at the McGill University Neurosurgical Simulation and Artificial Intelligence Learning Centre. Data were collected at a single time point; no follow-up data were collected. Individuals were classified a priori as experts (neurosurgery staff), seniors (neurosurgical fellows and senior residents), juniors (neurosurgical junior residents), or medical students; together they performed 250 simulated tumor resections.

EXPOSURES All individuals participated in a virtual reality neurosurgical tumor resection scenario. Each scenario was repeated 5 times.

MAIN OUTCOMES AND MEASURES Through an iterative process, performance metrics associated with instrument movement and force, resection of tissues, and bleeding, generated from the raw simulator data output, were selected by K-nearest neighbor, naive Bayes, discriminant analysis, and support vector machine algorithms to most accurately determine group membership.

RESULTS A total of 50 individuals (9 women and 41 men; mean [SD] age, 33.6 [9.5] years; 14 neurosurgeons, 4 fellows, 10 senior residents, 10 junior residents, and 12 medical students) participated. Neurosurgeons had been in practice between 1 and 25 years, and 9 (64%) had a predominantly cranial practice. The K-nearest neighbor algorithm had an accuracy of 90% (45 of 50), the naive Bayes algorithm 84% (42 of 50), the discriminant analysis algorithm 78% (39 of 50), and the support vector machine algorithm 76% (38 of 50). The K-nearest neighbor algorithm used 6 performance metrics to classify participants, the naive Bayes algorithm 9, the discriminant analysis algorithm 8, and the support vector machine algorithm 8. Two neurosurgeons, 1 fellow or senior resident, 1 junior resident, and 1 medical student were misclassified.

CONCLUSIONS AND RELEVANCE In a virtual reality neurosurgical tumor resection study, a machine learning algorithm successfully classified participants into 4 levels of expertise with 90% accuracy.
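The four-classifier comparison above can be sketched in a few lines. The following is an illustrative scikit-learn reconstruction, not the study's MATLAB pipeline: the metric names, group sizes, and synthetic data are assumptions standing in for the simulator's actual performance metrics.

```python
# Hypothetical sketch: comparing the four classifier families from the study
# (K-nearest neighbor, naive Bayes, discriminant analysis, support vector
# machine) on synthetic "performance metric" data. Real inputs would be
# per-participant simulator metrics (instrument force, tissue resected, etc.).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_group, n_metrics, groups = 12, 6, 4  # 4 expertise levels, 6 metrics
# Synthetic metrics: each expertise level shifts every metric's mean by 1 SD.
X = np.vstack([rng.normal(loc=g, scale=1.0, size=(n_per_group, n_metrics))
               for g in range(groups)])
y = np.repeat(np.arange(groups), n_per_group)

classifiers = {
    "K-nearest neighbor": KNeighborsClassifier(n_neighbors=5),
    "Naive Bayes": GaussianNB(),
    "Discriminant analysis": LinearDiscriminantAnalysis(),
    "Support vector machine": SVC(kernel="rbf"),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()  # 5-fold CV accuracy
    print(f"{name}: {acc:.2f}")
```

In the study itself each algorithm also selected its own metric subset (6 to 9 metrics) through an iterative process; the sketch omits that feature-selection step.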
Simulation-based training is increasingly used for the assessment and training of psychomotor skills in medicine. Artificial intelligence and machine learning technologies provide new methodologies for using large amounts of data for educational purposes. A significant criticism of artificial intelligence in education has been a lack of transparency in the algorithms' decision-making processes. This study aims to (1) introduce a new framework using explainable artificial intelligence for simulation-based training in surgery, and (2) validate the framework by creating the Virtual Operative Assistant, an automated educational feedback platform. Twenty-eight skilled participants (14 staff neurosurgeons, 4 fellows, 10 PGY 4-6 residents) and 22 novice participants (10 PGY 1-3 residents, 12 medical students) took part in this study. Participants performed a virtual reality subpial brain tumor resection task on the NeuroVR simulator using a simulated ultrasonic aspirator and bipolar. Metrics of performance were developed, and leave-one-out cross-validation was employed to train and validate a support vector machine in MATLAB. The classifier was combined with a unique educational system to build the Virtual Operative Assistant, which provides users with automated feedback on their metric performance with respect to expert proficiency performance benchmarks. The Virtual Operative Assistant successfully classified skilled and novice participants using 4 metrics with an accuracy of 92%, a specificity of 82%, and a sensitivity of 100%. A 2-step feedback system was developed to provide participants with an immediate visual representation of their standing relative to expert proficiency performance benchmarks. The educational system outlined establishes a basis for the potential role of artificial intelligence and virtual reality simulation in surgical educational teaching.
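The validation scheme described above, leave-one-out cross-validation of a support vector machine on per-participant metrics, can be sketched as follows. The paper's implementation was in MATLAB; this is an assumed scikit-learn equivalent with synthetic data in place of the NeuroVR metrics, so the numbers it produces are illustrative only.

```python
# Hedged sketch: leave-one-out cross-validation (LOOCV) of an SVM
# distinguishing skilled from novice participants. Group sizes match the
# abstract (28 skilled, 22 novice, 4 metrics); the data are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)
n_skilled, n_novice, n_metrics = 28, 22, 4
X = np.vstack([
    rng.normal(1.0, 0.8, size=(n_skilled, n_metrics)),   # skilled group
    rng.normal(-1.0, 0.8, size=(n_novice, n_metrics)),   # novice group
])
y = np.array([1] * n_skilled + [0] * n_novice)  # 1 = skilled, 0 = novice

# Each participant is held out once; the SVM is trained on the remaining 49.
pred = cross_val_predict(SVC(kernel="linear"), X, y, cv=LeaveOneOut())

accuracy = (pred == y).mean()
skilled_rate = (pred[y == 1] == 1).mean()  # skilled correctly classified
novice_rate = (pred[y == 0] == 0).mean()   # novices correctly classified
print(f"accuracy={accuracy:.2f} skilled={skilled_rate:.2f} "
      f"novice={novice_rate:.2f}")
```

LOOCV is a natural fit here: with only 50 participants, holding out one at a time uses nearly all the data for each training fold while still giving an unbiased per-participant prediction.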
Linking expertise classification, objective feedback based on proficiency benchmarks, and instructor input creates a novel educational tool, integrating these three components into a formative educational paradigm.
Our pilot study demonstrates that the safety, quality, and efficiency of novice and expert operators can be measured using metrics derived from the NeuroTouch platform, helping to clarify how operator performance depends on both psychomotor ability and cognitive input during repeated virtual reality brain tumor resections.
BACKGROUND Virtual reality surgical simulators provide a safe environment for trainees to practice specific surgical scenarios and allow for self-guided learning. Artificial intelligence technology, including artificial neural networks, offers the potential to manipulate large datasets from simulators to gain insight into the importance of specific performance metrics during simulated operative tasks.

OBJECTIVE To distinguish performance in a virtual reality-simulated anterior cervical discectomy scenario, uncover novel performance metrics, and gain insight into the relative importance of each metric using artificial neural networks.

METHODS Twenty-one participants performed a simulated anterior cervical discectomy on the novel virtual reality Sim-Ortho simulator. Participants were divided into 3 groups: 9 post-resident, 5 senior, and 7 junior participants. This study focused on the discectomy portion of the task. Data were recorded and manipulated to calculate metrics of performance for each participant. Neural networks were trained and tested, and the relative importance of each metric was calculated.

RESULTS A total of 369 metrics spanning 4 categories (safety, efficiency, motion, and cognition) were generated. An artificial neural network was trained on 16 selected metrics and tested, achieving a training accuracy of 100% and a testing accuracy of 83.3%. Network analysis identified safety metrics, including the number of contacts on spinal dura, as highly important.

CONCLUSION Artificial neural networks classified 3 groups of participants based on expertise, allowing insight into the relative importance of specific metrics of performance. This novel methodology aids in the understanding of which components of surgical performance predominantly contribute to expertise.
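The approach above, training a network on selected metrics and then ranking their relative importance, can be sketched as follows. The study's network architecture and importance method are not specified here, so this sketch assumes a small multilayer perceptron and permutation importance; the 3 groups, the 16 metrics, and the signal carried by the first metric (standing in for "contacts on spinal dura") are synthetic stand-ins.

```python
# Illustrative sketch: classify 3 expertise groups from 16 performance
# metrics with a small neural network, then estimate each metric's
# relative importance by permutation. All data and sizes are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n, n_metrics = 90, 16
X = rng.normal(size=(n, n_metrics))
# Make metric 0 (standing in for a safety metric such as dural contacts)
# determine the 3-way group label; the other 15 metrics are noise.
y = np.digitize(X[:, 0], [-0.43, 0.43])  # groups 0, 1, 2 (~tertiles)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, random_state=0, stratify=y)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

# Permutation importance: how much does shuffling each metric hurt accuracy?
imp = permutation_importance(net, X_te, y_te, n_repeats=20, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]
print("metrics ranked by importance:", ranking[:5])
```

Permutation importance is one of several ways to open the "black box" the conclusion alludes to; connection-weight analysis (as is common for small neural networks) would serve the same purpose.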
The NeuroTouch platform, incorporating the simulated scenarios and metrics used, differentiates novice from expert neurosurgical performance, demonstrating NeuroTouch face, content, and construct validity and supporting the development of proficiency performance benchmarks for brain tumor resection.