Background: Literature searches underlie the foundations of systematic reviews and related review types, yet the literature searching component of these reviews is often poorly reported. Guidance for literature search reporting has been diverse and, in many cases, does not offer enough detail to authors who need more specific information about reporting search methods and information sources in a clear, reproducible way. This document presents the PRISMA-S (Preferred Reporting Items for Systematic reviews and Meta-Analyses literature search extension) checklist, with explanation and elaboration.
Methods: The checklist was developed using a three-stage Delphi survey process, followed by a consensus conference and a public review process.
Results: The final checklist includes 16 reporting items, each detailed with exemplar reporting and rationale.
Conclusions: PRISMA-S is intended to complement the PRISMA Statement and its extensions by providing a checklist that interdisciplinary authors, editors, and peer reviewers can use to verify that each component of a search is completely reported and therefore reproducible.
Funding: U.S. Department of Veterans Affairs. Registration: PROSPERO CRD42016033623.
Background: Result summaries are now required to be reported in ClinicalTrials.gov for many trials of drugs and devices.
Purpose: To evaluate the consistency of reporting in trials that are both registered in the ClinicalTrials.gov results database and published in the literature.
Data Sources: The ClinicalTrials.gov results database and matched publications identified through both ClinicalTrials.gov and a manual search of two electronic databases.
Study Selection: A 10% random sample of Phase III or IV trials with results in the ClinicalTrials.gov results database, completed before January 1, 2009, with two or more arms.
Data Extraction: One reviewer extracted data from the ClinicalTrials.gov results database and matching publications; a subsample was independently verified. Basic design features and results were compared between reporting sources, and discrepancies were summarized.
Data Synthesis: Of 110 reviewed trials with results, most were industry-sponsored, parallel-design drug studies. The most common inconsistency was the number of secondary outcome measures reported (80%). Sixteen trials (15%) reported the primary outcome description inconsistently, and 22 (20%) reported the primary outcome value inconsistently. A total of 38 trials inconsistently reported the number of individuals with a serious adverse event (SAE); of these, 33 (87%) reported more SAEs in ClinicalTrials.gov. Among the 84 trials that reported SAEs in ClinicalTrials.gov, 11 publications did not mention SAEs, 5 reported SAEs as zero or not occurring, and 21 reported a different number of SAEs. Of the 29 trials that reported deaths in ClinicalTrials.gov, 28% differed from the matched publication.
Limitations: The small sample includes the earliest results posted to the database and therefore may reflect inexperience with the submission process.
Conclusions: Reporting discrepancies between the ClinicalTrials.gov results database and matching publications are common. It is unclear which reporting source contains the more accurate account of trial results. ClinicalTrials.gov may provide a more comprehensive description of trial adverse events than the publication.
Background: Rapid review (RR) products are inherently appealing because they are intended to be less time-consuming and resource-intensive than traditional systematic reviews (SRs); however, there is concern about the rigor of their methods and the reliability of their results. In 2013 to 2014, a workgroup comprising representatives from the Agency for Healthcare Research and Quality's Evidence-based Practice Center Program conducted a formal evaluation of RRs. This paper summarizes results, conclusions, and recommendations from published review articles examining RRs.
Methods: A systematic literature search was conducted, and publications were screened independently by two reviewers. Twelve review articles about RRs were identified. One investigator extracted data about RR methods and how they compared with standard SRs. A narrative summary is presented.
Results: A cross-comparison of review articles revealed the following: 1) ambiguous definitions of RRs, 2) varying timeframes to complete RRs, ranging from 1 to 12 months, 3) limited scope of RR questions, and 4) significant heterogeneity between RR methods.
Conclusions: RR definitions, methods, and applications vary substantially. Published review articles suggest that RRs should not be viewed as a substitute for a standard SR, although they have unique value for decision-makers. Recommendations for RR producers include transparency of methods used and the development of reporting standards.
Electronic supplementary material: The online version of this article (doi:10.1186/s13643-015-0040-4) contains supplementary material, which is available to authorized users.