Background: To improve reporting transparency and research integrity, some journals have begun publishing study protocols and statistical analysis plans alongside trial publications. We sought to assess the overall availability and characteristics of protocols and statistical analysis plans of randomized clinical trials published in the top five (by impact factor) general medicine journals. Methods: All randomized clinical trials published in Annals of Internal Medicine, BMJ, JAMA, Lancet, and NEJM in 2016 were identified. For each randomized clinical trial, we searched for protocols and statistical analysis plans on journal websites (including supplementary material) and in the article itself, for example via a referenced publication or a link to a trial or institutional website. Characteristics of randomized clinical trials were extracted from the publication and the clinical trial registry. A detailed assessment of protocols and statistical analysis plans was conducted in a 20% random sample of randomized clinical trials. Results: Protocols were available for 299 (82%) trials, ranging from 50% in BMJ to >95% in NEJM and JAMA. Statistical analysis plans were available for 182 (50%) trials and varied from <10% for Annals of Internal Medicine, BMJ, and Lancet to 92% for NEJM. Of the 76 randomized clinical trials in the 20% random sample, 63 (83%) had a protocol, but less than half (31; 44%) included an a priori (dated prior to patient enrollment) version of the protocol. Statistical analysis plans were available for 35 (46%) trials, and only 5 (7%) included an a priori version. Conclusion: Protocols and statistical analysis plans are publicly available for the majority of trials. However, the a priori versions of these documents are available for only a minority of trials. More attention must be paid to ensuring the public availability of a priori versions.
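Availability percentages such as those above can be accompanied by 95% confidence intervals using a Wilson score interval. A minimal Python sketch follows; note the denominator of 365 trials is an assumption (the abstract reports 299 as 82%, consistent with a total of roughly 365, but does not state the exact total):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Illustrative only: 299 trials with an available protocol out of an
# assumed total of 365 (the abstract gives 299 as 82%).
low, high = wilson_ci(299, 365)
print(f"82% availability, 95% CI: {low:.1%} to {high:.1%}")
```

The Wilson interval is preferred over the simple Wald interval for proportions near 0 or 1, which matters for the journal-level extremes (<10%, >95%) quoted in the abstract.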
Purpose: Trustworthy reporting of quadrivalent human papillomavirus (HPV) vaccine trials is the foundation for assessing the vaccine’s risks and benefits. However, several pivotal trial publications incompletely reported important methodological details and inaccurately described the formulation that the control arms received. Under the Restoring Invisible and Abandoned Trials (RIAT) initiative, we aim to restore the public record regarding the content and rationale of the controls used in the trials. Methods: We assembled a cohort (five randomised controlled trials) described as placebo-controlled using clinical study reports (CSRs) obtained from the European Medicines Agency. We extracted the content and rationale for the choice of control used in each trial across six data sources: trial publications, register records, CSR synopses, CSR main bodies, protocols and informed consent forms. Results: Across data sources, the control was inconsistently reported as ‘placebo’-containing aluminium adjuvant (sometimes with dose information). Amorphous aluminium hydroxyphosphate sulfate (AAHS) was not mentioned in any trial registry entry, but was mentioned in all publications and CSRs. In three of five trials, consent forms described the control as an ‘inactive’ substance. No rationale for the selection of the control was reported in any trial publication, register, consent form, CSR synopsis or protocol. Three trials reported the rationale for choice of control in CSRs: to preserve blinding and assess the safety of HPV virus-like particles, as the ‘safety profile of (AAHS) is well characterised’. Conclusions: The stated rationale for using the AAHS control—to characterise the safety of the HPV virus-like particles—lacks clinical relevance. A non-placebo control may have obscured an accurate assessment of safety, and the participant consent process of some trials raises ethical concerns. Trial registration numbers: NCT00092482, NCT00092521, NCT00092534, NCT00090220, NCT00090285.
Objectives: To synthesise research investigating data and code sharing in medicine and health to establish an accurate representation of the prevalence of sharing, how this frequency has changed over time, and what factors influence availability. Design: Systematic review with meta-analysis of individual participant data. Data sources: Ovid Medline, Ovid Embase, and the preprint servers medRxiv, bioRxiv, and MetaArXiv were searched from inception to 1 July 2021. Forward citation searches were also performed on 30 August 2022. Review methods: Meta-research studies that investigated data or code sharing across a sample of scientific articles presenting original medical and health research were identified. Two authors screened records, assessed the risk of bias, and extracted summary data from study reports when individual participant data could not be retrieved. Key outcomes of interest were the prevalence of statements that declared that data or code were publicly or privately available (declared availability) and the success rates of retrieving these products (actual availability). The associations between data and code availability and several factors (eg, journal policy, type of data, trial design, and human participants) were also examined. A two-stage approach to meta-analysis of individual participant data was taken, with proportions and risk ratios pooled with the Hartung-Knapp-Sidik-Jonkman method for random effects meta-analysis. Results: The review included 105 meta-research studies examining 2 121 580 articles across 31 specialties. Eligible studies examined a median of 195 primary articles (interquartile range 113-475), with a median publication year of 2015 (interquartile range 2012-2018). Only eight studies (8%) were classified as having a low risk of bias. Meta-analyses showed a prevalence of declared and actual public data availability of 8% (95% confidence interval 5% to 11%) and 2% (1% to 3%), respectively, between 2016 and 2021. For public code sharing, the prevalence of both declared and actual availability was estimated to be <0.5% since 2016. Meta-regressions indicated that only declared public data sharing prevalence estimates have increased over time. Compliance with mandatory data sharing policies ranged from 0% to 100% across journals and varied by type of data. In contrast, success in privately obtaining data and code from authors historically ranged from 0% to 37% and from 0% to 23%, respectively. Conclusions: The review found that public code sharing was persistently low across medical research. Declarations of data sharing were also low and, although increasing over time, did not always correspond to actual sharing of data. The effectiveness of mandatory data sharing policies varied substantially by journal and type of data, a finding that might be informative for policy makers when designing policies and allocating resources to audit compliance. Systematic review registration: Open Science Framework, doi:10.17605/OSF.IO/7SX8U.
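The pooling step named in the methods can be sketched in a few lines. The following is a minimal, illustrative implementation of Hartung-Knapp-Sidik-Jonkman random-effects pooling of prevalence estimates on the logit scale, with a DerSimonian-Laird tau² estimate; the four study samples are invented for demonstration, and the t critical value is hard-coded for k = 4 studies:

```python
import math

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def hksj_pool(props, ns, t_crit):
    """Pool prevalence estimates with DerSimonian-Laird tau^2 and the
    Hartung-Knapp-Sidik-Jonkman variance estimator, on the logit scale."""
    k = len(props)
    y = [math.log(p / (1 - p)) for p in props]                         # logit transform
    v = [1 / (n * p) + 1 / (n * (1 - p)) for p, n in zip(props, ns)]   # delta-method variances
    w = [1 / vi for vi in v]                                           # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))             # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                                 # DerSimonian-Laird tau^2
    ws = [1 / (vi + tau2) for vi in v]                                 # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(ws, y)) / sum(ws)
    # HKSJ adjustment: weighted residual variance with a t(k-1) interval
    var = sum(wi * (yi - mu) ** 2 for wi, yi in zip(ws, y)) / ((k - 1) * sum(ws))
    half = t_crit * math.sqrt(var)
    return inv_logit(mu), inv_logit(mu - half), inv_logit(mu + half)

# Invented example: declared data availability in four hypothetical samples
pooled, lo, hi = hksj_pool([0.05, 0.10, 0.08, 0.12], [200, 150, 300, 250],
                           t_crit=3.182)  # t_{0.975} with k-1 = 3 df
print(f"pooled prevalence {pooled:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

The HKSJ adjustment tends to give wider, better-calibrated intervals than the standard random-effects approach when the number of studies is small, which is one reason meta-research reviews like this one adopt it.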
Numerous studies have demonstrated low but increasing rates of data and code sharing within medical and health research disciplines. However, it remains unclear how commonly data and code are shared across all fields of medical and health research, and whether sharing rates are positively associated with the implementation of progressive policies by publishers and funders or with growing expectations from the medical and health research community at large. Therefore, this systematic review aims to synthesise the findings of medical and health science studies that have empirically investigated the prevalence of data or code sharing, or both. Objectives include the investigation of: (i) the prevalence of public sharing of research data and code alongside published articles (including preprints), (ii) the prevalence of private sharing of research data and code in response to reasonable requests, and (iii) factors associated with the sharing of either research output (e.g., the year published, the publisher’s policy on sharing, the presence of a data or code availability statement). It is hoped that the results will provide insight into how often research data and code are shared publicly and privately, how this has changed over time, and how effective measures such as data sharing policies and data availability statements have been in motivating researchers to share their underlying data and code.