Selective outcome reporting is a form of bias arising from discrepancies between the outcomes pre-specified in a trial's registration and those presented in the published report. We investigated this bias in obesity clinical trials. A PubMed search was conducted to identify randomized controlled trials (RCTs) published in four obesity journals from 2013 to 2015. Primary, secondary and tertiary outcomes were recorded for each trial and compared with the pre-specified outcomes in the trial's registration. Of the 392 identified articles, 142 were included in the final analysis; 22 (15%) RCTs demonstrated major outcome discrepancies between registration and publication: no primary outcomes were demoted to a secondary or tertiary outcome; 14 (36.84%) primary outcomes were omitted; 14 (36.84%) primary outcomes were added; 5 (13.16%) secondary outcomes were upgraded to primary outcomes; and the timing of assessment for a primary outcome changed 5 (13.16%) times. Of the 63 prospectively registered studies, 53 had no discrepancies. A total of 76 studies (29.80%) were unregistered or lacked an associated registration number. These results suggest that selective outcome reporting may be a concern in obesity clinical trials. Because selective outcome reporting can distort clinical findings and limit the outcomes available to systematic reviews, we encourage trialists and journal editors to work toward solutions that mitigate this issue.
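The per-category percentages above are shares of the individual discrepancies rather than of the 142 trials. A minimal sketch (not the authors' code) tallying the four reported discrepancy categories reproduces those figures:

```python
# Discrepancy counts as reported in the abstract (illustrative tally only).
discrepancies = {
    "primary outcome omitted": 14,
    "primary outcome added": 14,
    "secondary upgraded to primary": 5,
    "timing of primary assessment changed": 5,
}

total = sum(discrepancies.values())  # 38 discrepancies across 22 trials

# Each category's share of all discrepancies, rounded to two decimals.
shares = {k: round(100 * v / total, 2) for k, v in discrepancies.items()}
# 14/38 -> 36.84%, 5/38 -> 13.16%, matching the abstract.

# Share of trials with any major discrepancy: 22 of 142 included RCTs.
trial_share = round(100 * 22 / 142)  # -> 15
```

This also confirms the 15% figure for trials with major discrepancies (22/142).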
Introduction: With efforts to combat opioid use disorder, there is increased interest in clinical practice guidelines (CPGs) for opioid use disorder treatments. No literature has examined the quality of the systematic reviews used in opioid use disorder CPGs. This study aims to describe the methodological quality and reporting clarity of systematic reviews (SRs) used to create CPGs for opioid use disorder.

Methods: From June to July 2016, guideline clearinghouses and medical literature databases were searched for relevant CPGs used in the treatment of opioid use disorder. Included CPGs had to be recognized by a national organization. SRs from the reference section of each CPG were scored using the AMSTAR (a measurement tool to assess the methodological quality of systematic reviews) tool and the PRISMA (preferred reporting items for systematic reviews and meta-analyses) checklist.

Results: Seventeen CPGs from 2006–2016 were included in the review. From these, 57 unique SRs were extracted. SRs comprised 0.28% to 17.92% of all references found in the CPGs. All SRs obtained a moderate or high methodological quality score on the AMSTAR tool, and all reviews met at least 70% of the PRISMA criteria. Underperforming areas in PRISMA included accurate title labeling, protocol registration, and risk of bias; underperforming areas in AMSTAR included conflicts of interest, funding, and publication bias. A positive correlation was found between AMSTAR and PRISMA scores (r = .79).

Conclusion: Although the SRs in the CPGs were of good quality, there are still areas for improvement. Systematic reviewers should consult PRISMA and AMSTAR when conducting and reporting reviews. CPG developers should consider methodological quality as a factor when developing recommendations, recognizing that the quality of the systematic reviews underpinning a guideline does not necessarily correspond to the quality of the guideline itself.