2023
DOI: 10.1097/as9.0000000000000292
Developing Surgical Skill Level Classification Model Using Visual Metrics and a Gradient Boosting Algorithm

Abstract: Objective: Assessment of surgical skills is crucial for improving training standards and ensuring the quality of primary care. This study aimed to develop a gradient-boosting classification model to classify surgical expertise into inexperienced, competent, and experienced levels in robot-assisted surgery (RAS) using visual metrics. Methods: Eye gaze data were recorded from 11 participants performing 4 subtasks: blunt dissection, retraction, cold dissec…
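As a hedged sketch of the setup the abstract describes (three expertise classes predicted from visual metrics), the following uses scikit-learn's GradientBoostingClassifier on synthetic data. The specific boosting implementation, hyperparameters, and feature values are assumptions for illustration, not the paper's actual pipeline.

```python
# Illustrative sketch only: scikit-learn's GradientBoostingClassifier stands
# in for the paper's (unspecified) gradient boosting implementation, and the
# feature matrix is synthetic random data, not real eye gaze recordings.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features = 120, 12        # 12 eye gaze features, per the Methods
X = rng.normal(size=(n_samples, n_features))
# 0 = inexperienced, 1 = competent, 2 = experienced (labels from the Objective)
y = rng.integers(0, 3, size=n_samples)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(pred.shape)
```

With real data, the labels would come from participants' known experience levels and the features from the extracted gaze metrics; accuracy on the random data above is near chance by construction.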

Cited by 7 publications (8 citation statements)
References 39 publications
“…Most often, motion data from surgical videos or kinematic data from robotic systems or sensors were collected from simulators rather than during actual surgical procedures. The most common simulators used were robotic box models ( n = 27, 54%) [ 13 , 14 , 16 – 20 , 22 , 23 , 25 , 29 , 32 , 37 , 43 , 45 – 48 , 50 , 53 – 56 , 58 – 61 ]. Laparoscopic simulators were the second most common setting for data collection ( n = 15, 30%) [ 12 , 21 , 24 , 26 , 27 , 30 , 31 , 35 , 36 , 40 , 42 , 49 , 51 , 54 , 57 ].…”
Section: Results (mentioning, confidence: 99%)
“…Four different types of input data were used throughout the 50 studies: video data ( n = 25, 50%) [ 12 , 13 , 15 , 21 , 24 , 25 , 27 , 28 , 31 – 34 , 36 , 38 , 39 , 41 – 45 , 50 – 52 , 55 , 58 ], kinematic data ( n = 22, 44%) [ 14 , 16 – 20 , 22 , 23 , 29 , 35 , 37 , 40 , 46 – 49 , 54 , 56 , 57 , 59 – 61 ], eye tracking data ( n = 2) [ 36 , 53 ], and functional near-infrared spectroscopy (fNIRS) data ( n = 2) [ 26 , 30 ]. Video recordings either from laparoscopic/robotic cameras or external cameras are used in 25 studies (50%).…”
Section: Results (mentioning, confidence: 99%)
“…Preprocessing involved applying a moving average filter with a window size of 3 points to reduce noise, and a velocity-threshold identification fixation filter with a threshold of 30° per second to identify fixation and saccadic points. Twelve eye gaze features were extracted, including pupil diameter, entropy, fixation time points, saccade time points, gaze direction change, and pupil trajectory length for both eyes [ 17 , 18 ].…”
Section: Methods (mentioning, confidence: 99%)
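The preprocessing quoted above (a 3-point moving average, then a velocity-threshold fixation filter at 30° per second) can be sketched as follows. The 60 Hz sampling rate, the synthetic step-shaped gaze trace, and the centered-difference velocity estimate are illustrative assumptions; the paper does not specify these details.

```python
# Sketch of the two preprocessing steps: smooth the gaze signal with a
# 3-point moving average, then label samples whose angular velocity is
# below 30 deg/s as fixations and the rest as saccades (I-VT style).
import numpy as np

def moving_average(signal, window=3):
    """Smooth a 1-D signal with a centered moving average (zero-padded ends)."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

def velocity_threshold_filter(gaze_deg, fs_hz, threshold_deg_s=30.0):
    """Return a boolean mask: True = fixation sample, False = saccade sample."""
    velocity = np.abs(np.gradient(gaze_deg)) * fs_hz   # deg/s via finite differences
    return velocity < threshold_deg_s

fs = 60.0                                  # assumed sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)
gaze = np.where(t < 0.5, 1.0, 6.0)         # synthetic 5-degree gaze shift at t = 0.5 s
smoothed = moving_average(gaze)
fixation_mask = velocity_threshold_filter(smoothed, fs)
print(int(fixation_mask.sum()), "fixation samples of", fixation_mask.size)
```

Samples around the abrupt 5-degree shift exceed the 30°/s threshold and are marked as saccades, while the flat segments on either side are marked as fixations; features like fixation time points and saccade time points would then be computed from runs of this mask.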