We identified ‘spin’ in abstracts of randomised controlled trials (RCTs) with nonsignificant primary endpoints in psychiatry and psychology journals. This is a cross-sectional review of clinical trials with nonsignificant primary endpoints published in psychiatry and psychology journals from January 2012 to December 2017. The main outcome was the frequency and manifestation of spin in the abstracts. We defined spin as the ‘use of specific reporting strategies, from whatever motive, to highlight that the experimental treatment is beneficial, despite a statistically nonsignificant difference for the primary outcome, or to distract the reader from statistically nonsignificant results’. We also assessed the relationship between industry funding and spin. Of the 486 RCTs examined, 116 were included in our analysis of spin. Spin was identified in 56% (n=65) of the included trials: in 2 (2%) titles, 24 (21%) abstract results sections and 57 (49.1%) abstract conclusion sections. Spin appeared in both the results and conclusions sections in 15% of RCTs (n=17). Twelve articles (10%) reported industry funding. Industry funding was not associated with increased odds of spin in the abstract (unadjusted OR: 1.0; 95% CI: 0.3 to 3.2); that is, we found no relationship between industry funding and spin in abstracts. These findings raise concerns about the effects spin may have on clinicians. Further steps could be taken to address spin, including inviting peer reviewers to comment on its presence and updating the Consolidated Standards of Reporting Trials (CONSORT) guidelines to include language discouraging spin.
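For readers unfamiliar with the statistic reported above, an unadjusted odds ratio and its Wald 95% confidence interval can be computed directly from a 2×2 table of funding status versus presence of spin. The sketch below uses hypothetical cell counts chosen only to illustrate the arithmetic; they are not the study's actual data, which the abstract does not report at the cell level.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald 95% confidence interval.

    a: exposed (industry-funded) with outcome (spin)
    b: exposed without outcome
    c: unexposed with outcome
    d: unexposed without outcome
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) for a 2x2 table
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical counts (NOT the study's data): 6 of 12 industry-funded
# trials and 52 of 104 non-industry trials with spin.
or_, lo, hi = odds_ratio_ci(6, 6, 52, 52)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

With these illustrative counts the odds ratio is exactly 1.0 with a CI of roughly 0.30 to 3.30, showing how a small exposed group (here only 12 funded trials) yields the wide interval reported in the abstract.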
Cardiac and cardiovascular system journals infrequently require, recommend or enforce the use of reporting guidelines. Furthermore, too few require or enforce clinical trial registration. Cardiology journal editors should consider adopting these guidelines, given their potential to limit bias and increase transparency.
Background: Clinical practice guidelines (CPGs) contain recommendations that help physicians determine the most appropriate care for patients. These guidelines systematically combine scientific evidence and clinical judgment, culminating in recommendations intended to optimize patient care. The recommendations in CPGs are supported by evidence that varies in quality. We aimed to survey the CPGs created by the American College of Gastroenterology, report the level of evidence supporting their recommendations, and identify areas where the evidence could be improved with additional research. Methods: We extracted 1328 recommendations from 39 CPGs published by the American College of Gastroenterology. Because several of the guidelines used differing classifications of evidence for their recommendations, we devised a uniform system of evidence to standardize our results. Results: A total of 39 CPGs, together accounting for 1328 recommendations, were surveyed in our study. Of these recommendations, 693 (52.2%) were based on low-quality evidence or expert opinion. Among individual guidelines, 13/39 (33.3%) contained no recommendations based on high-quality evidence. Conclusion: Very few recommendations made by the American College of Gastroenterology are supported by high levels of evidence; more than half are based on low-quality evidence or expert opinion.
Take Home Message: Many components of transparency and reproducibility are lacking in urology publications, making study replication difficult at best. Introduction: Reproducibility is essential for the integrity of scientific research. It is measured by the ability of investigators to replicate the outcomes of an original publication using the same materials and procedures. Methods: We sampled 300 publications in the field of urology and assessed multiple indicators of reproducibility, including material availability, raw data availability, analysis script availability, pre-registration information, links to protocols, and whether the publication was freely available to the public. Publications were also assessed for statements about conflicts of interest and funding sources. Results: Of the 300 sampled publications, 171 contained empirical data and could be analyzed for reproducibility. Of the analyzed articles, 0.58% (1/171) provided links to protocols, and none provided analysis scripts. Additionally, 95.91% (164/171) did not provide accessible raw data, 97.53% (158/162) did not provide accessible materials, and 95.32% (163/171) did not state that they were pre-registered. Conclusion: Current urology research does not consistently provide the components needed to reproduce original studies. Collaborative efforts from investigators and journal editors are needed to improve research quality while minimizing waste and patient risk.
Publication bias can arise in systematic reviews when unpublished data are omitted, leading to inaccurate clinical decision making and adverse clinical outcomes. By searching clinical trial registries (CTRs), researchers can create more accurate systematic reviews and mitigate the risk of publication bias. The aims of this study were to evaluate CTR use in systematic reviews and meta-analyses within the minimally invasive surgical oncology (MISO) literature, and to search ClinicalTrials.gov for a subset of those reviews to determine whether eligible trials existed that could have been used. This is a cross-sectional study of 197 systematic reviews and meta-analyses retrieved from PubMed. Of the 137 included studies, 18 (13.1%) reported searching a CTR. Our ClinicalTrials.gov search revealed that of the 25 randomly selected systematic reviews that did not conduct a trial registry search, 16 (64.0%) would have identified additional data sources. MISO systematic reviews and meta-analyses do not regularly use CTRs in their data collection, despite eligible trials being freely available.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.