2016
DOI: 10.1016/j.juro.2016.01.005
Crowd-Sourced Assessment of Technical Skills for Validation of Basic Laparoscopic Urologic Skills Tasks

Abstract: The concordance of crowdsourcing with faculty panels and the speed of reviews are sufficiently high to merit further investigation of crowdsourcing alongside automated motion metrics. The overall agreement among faculty, motion metrics, and crowdworkers provides evidence in support of the construct validity of 2 of the 4 BLUS tasks.

Cited by 47 publications (29 citation statements), published 2016–2021
References 19 publications
“…This study used the Basic Laparoscopic Urologic Skills (BLUS) dataset, described in detail in [1]. This dataset arose from a gap in the field: no educational surgical certification process existed for urologic surgery, whereas the Fundamentals of Laparoscopic Surgery (FLS) program exists for general surgical procedures [19], [20], [21].…”
Section: Dataset (mentioning)
confidence: 99%
“…Purpose: Finding effective methods of discriminating surgeon technical skill has proven to be a complex computational problem. Previous research has shown that obtaining non-expert crowd evaluations of surgical performances is as accurate as the gold standard, expert surgeon review [1]. The aim of this research is to learn whether crowdsourced evaluators give higher technical-skill ratings to videos of performances played back at increased speed, how playback speed affects the discrimination of skill levels, and whether any rating increase depends on the evaluator being consciously aware that the video has been manipulated.…”
mentioning
confidence: 99%
“…[3][4][5] Recently, evaluation of video footage by laypeople trained in the use of validated assessment metrics, termed "crowdsourcing," has been shown to provide efficient and reliable feedback that correlates well with expert ratings. 6,7 This methodology has been translated to differentiating surgeon skill 8 in robotic prostatectomy. To improve the care provided to patients, one aspect of surgical quality improvement seeks to identify intraoperative steps that contribute to clinically relevant outcomes.…”
Section: Introduction (mentioning)
confidence: 99%
“…The authors compared assessments by expert raters (robotic and open surgeons) with those by laypeople from the Crowd-Sourced Assessment of Technical Skills (C-SATS) 1,2 group regarding experienced surgeons' technical ability during robotic-assisted radical cystectomy. The authors prepared short video segments (60 seconds) showing mobilization of the ureter, ureteral preparation for the anastomosis, and the ureteral-ileal anastomosis from nine (out of a potential 102) cases that resulted in clinically significant postoperative uretero-ileal strictures (UIS) (10 strictures in total).…”
mentioning
confidence: 99%