Introduction
The development of reporting guidelines over the past 20 years represents a major advance in scholarly publishing, with recent evidence showing positive impacts. Whilst over 350 reporting guidelines exist, few are specific to surgery. Here we describe the development of the STROCSS guideline (Strengthening the Reporting of Cohort Studies in Surgery).
Methods and analysis
We published our protocol a priori. Current guidelines for case series (PROCESS), cohort studies (STROBE) and randomised controlled trials (CONSORT) were analysed to compile a list of items, which served as baseline material for developing a suitable checklist for surgical cohort guidelines. These items were then put forward in a Delphi consensus exercise to an expert panel of 74 surgeons and academics via Google Forms.
Results
The Delphi exercise was completed by 62% (46/74) of the participants. All items were approved in a single round, producing a STROCSS guideline consisting of 17 items.
Conclusion
We present the STROCSS guideline, a 17-item checklist for surgical cohort, cross-sectional and case-control studies. We hope its use will increase the transparency and reporting quality of such studies. We encourage authors, reviewers, journal editors and publishers to adopt these guidelines.
To evaluate all simulation models for ophthalmology technical and non-technical skills training and the strength of evidence supporting their validity and effectiveness. A systematic search was performed using PubMed and Embase for studies published from inception to 01/07/2019. Studies were analysed according to training modality: virtual reality; wet-lab; dry-lab models; e-learning. The educational impact of studies was evaluated using Messick’s validity framework and McGaghie’s model of translational outcomes for evaluating effectiveness. One hundred and thirty-one studies were included in this review, describing 93 different simulators. Fifty-three studies were based on virtual reality tools; 47 on wet-lab models; 26 on dry-lab models; 5 on e-learning. Only two studies provided evidence for all five sources of validity assessment. Models with the strongest validity evidence were the Eyesi Surgical, Eyesi Direct Ophthalmoscope and Eye Surgical Skills Assessment Test. Effectiveness ratings for simulator models were mostly limited to level 2 (contained effects), with the exception of the Sophocle vitreoretinal surgery simulator, which was shown at level 3 (downstream effects), and the Eyesi at level 5 (target effects) for cataract surgery. A wide range of models have been described, but only the Eyesi has undergone comprehensive investigation. The main weakness is the poor quality of study design, with a predominance of descriptive reports showing limited validity evidence and few studies investigating the effects of simulation training on patient outcomes. More robust research is needed to enable effective implementation of simulation tools into current training curricula.
Construct, face, and content validity were established for the RobotiX Mentor, and the feasibility and acceptability of its incorporation into surgical training were ascertained. The RobotiX Mentor shows potential as a valuable tool for training and assessment of trainees in robotic skills. Investigation of concurrent and predictive validity is necessary to complete validation, and evaluation of learning curves would provide insight into its value for training.
Simulation has become widely accepted as a supplementary method of training. Within urology, the greatest number of procedure-specific models and subsequent validation studies have been carried out in the field of endourology. Many generic-skills simulators have been created for laparoscopic and robot-assisted surgery, but only a limited number of procedure-specific models are available. By contrast, open urological simulation has seen only a handful of validated models. Of the available modalities, virtual reality (VR) simulators are most commonly used for endourology and robotic surgery training, the former also employing many high-fidelity bench models. Smaller dry-lab and ex vivo animal models have been used for laparoscopic and robotic training, whereas live animals and human cadavers are widely used for full procedural training. Newer concepts such as augmented-reality (AR) models and patient-specific simulators have also been introduced. Several curricula, including one recommended within, have been produced, incorporating various training modalities and nontechnical skills training techniques. Such curricula and validated models should be used in a structured fashion to supplement operating room training.