Background: Systematic reviews (SRs) are often cited as the highest level of evidence because they identify and synthesize all published studies on a topic. Unfortunately, given the exponential rise in the volume of primary literature, it is increasingly challenging for small teams to complete SR procedures in a reasonable time period. Crowdsourcing has been proposed as a potential solution.

Objective: The feasibility objective of this study was to determine whether a crowd would be willing to perform and complete abstract and full-text screening. The validation objective was to assess the quality of the crowd's work, including retention of eligible citations (sensitivity) and work performed for the investigative team, defined as the percentage of citations excluded by the crowd.

Methods: We performed a prospective study evaluating crowdsourcing of essential components of an SR: abstract screening, document retrieval, and full-text assessment. Using the CrowdScreenSR citation screening software, 2323 articles from 6 SRs were made available to an online crowd. Citations excluded by 75% or fewer of crowd assessors were moved forward for full-text assessment. For the validation component, the crowd's performance was compared with citation review by the accepted gold standard: trained expert screening.

Results: Of 312 potential crowd members, 117 (37.5%) commenced abstract screening and 71 (22.8%) completed the minimum requirement of 50 citation assessments. The majority of participants were undergraduate or medical students (192/312, 61.5%). The crowd screened 16,988 abstracts (median 8 per citation; interquartile range [IQR] 7-8), and all citations achieved the minimum of 4 assessments after a median of 42 days (IQR 26-67). Crowd members retrieved 83.5% (774/927) of the articles that progressed to the full-text phase. A total of 7604 full-text assessments were completed (median 7 per citation; IQR 3-11). Citations from all but 1 review achieved the minimum of 4 assessments after a median of 36 days (IQR 24-70), with 1 review remaining incomplete after 3 months. When complete crowd agreement at both levels was required for exclusion, sensitivity was 100% (95% CI 97.9-100) and work performed was 68.3% (95% CI 66.4-70.1). With the predefined alternative 75% exclusion threshold, sensitivity remained 100% and work performed increased to 72.9% (95% CI 71.0-74.6; P<.001). Finally, with a simple majority threshold, sensitivity decreased marginally to 98.9% (95% CI 96.0-99.7; P=.25) and work performed increased substantially to 80.4% (95% CI 78.7-82.0; P<.001).

Conclusions: Crowdsourcing of citation screening for SRs is feasible and has reasonable sensitivity and specificity. By expediting the screening process, crowdsourcing could permit the i...
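The threshold comparison described above (unanimous, 75%, and simple-majority exclusion) can be sketched as a small vote-aggregation routine. This is an illustrative reconstruction, not the study's actual code: the function names, the vote data structure, and the tie-handling at the threshold boundary are assumptions, and the published study may resolve these details differently.

```python
def screen(votes, threshold):
    """Return the set of citation ids the crowd excludes.

    votes: dict mapping citation id -> list of bools (True = "exclude" vote).
    threshold: fraction of exclude votes required to drop a citation
               (1.0 = unanimous, 0.75, 0.5 = simple majority).
    Tie-handling (>= vs >) at the boundary is an assumption here.
    """
    return {cid for cid, v in votes.items() if sum(v) / len(v) >= threshold}


def evaluate(votes, gold_includes, threshold):
    """Compute the two abstract-level metrics for a given threshold.

    gold_includes: citation ids the expert (gold standard) screen retained.
    Returns (sensitivity, work_performed) as fractions.
    """
    excluded = screen(votes, threshold)
    retained = set(votes) - excluded
    # Sensitivity: share of truly eligible citations the crowd retained.
    sensitivity = len(gold_includes & retained) / len(gold_includes)
    # Work performed: share of all citations the crowd removed for the team.
    work_performed = len(excluded) / len(votes)
    return sensitivity, work_performed
```

For example, with `votes = {1: [True]*8, 2: [True]*6 + [False]*2, 3: [False]*8}` and expert gold standard `{3}`, a unanimous threshold excludes only citation 1, while the 0.75 threshold also excludes citation 2, increasing work performed without losing the eligible citation. This mirrors the trade-off the Results report: looser thresholds do more of the experts' work at some risk to sensitivity.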
Children with chronic critical illness (CCI) are hypothesized to be a high-risk patient population with persistent multiple organ dysfunction and functional morbidities resulting in recurrent or prolonged critical care; however, it is unclear how CCI should be defined. The aim of this scoping review was to evaluate the existing literature for case definitions of pediatric CCI and of prolonged PICU admission, and to explore the methodologies used to derive these definitions.

DATA SOURCES: Four electronic databases (Ovid Medline, Embase, CINAHL, and Web of Science) were searched from inception to March 3, 2021.

STUDY SELECTION: We included studies that provided a specific case definition for CCI or prolonged PICU admission. Crowdsourcing was used to screen citations independently and in duplicate. A machine-learning algorithm was developed and validated using 6,284 citations assessed in duplicate by trained crowd reviewers. A hybrid of crowdsourcing and machine-learning methods was used to complete the remaining citation screening.

DATA EXTRACTION: We extracted details of case definitions, study demographics, participant characteristics, and outcomes assessed.

DATA SYNTHESIS: Sixty-seven studies were included. Twelve studies (18%) provided a definition for CCI that included concepts of PICU length of stay (n = 12), medical complexity or chronic conditions (n = 9), recurrent admissions (n = 9), technology dependence (n = 5), and uncertain prognosis (n = 1). Definitions were commonly referenced from another source (n = 6) or opinion-based (n = 5). The remaining 55 studies (82%) provided a definition for prolonged PICU admission, most frequently greater than or equal to 14 days (n = 11) or greater than or equal to 28 days (n = 10). Most of these definitions were derived by investigator opinion (n = 24) or statistical method (n = 18).

CONCLUSIONS: Pediatric CCI has been variably defined with regard to the concepts of patient complexity and chronicity of critical illness.
A consensus definition is needed to advance this emerging and important area of pediatric critical care research.