Highlights
• Drinkers underestimated the number of drinks that constituted 14 units.
• Unit understanding was greater for novel unit labels compared to industry labels.
• Motivation to drink less was higher for cancer and negatively-framed messages.
Primary data collected during a research study is often shared and may be reused for new studies. To assess the extent of data sharing in favourable circumstances and whether data sharing checks can be automated, this article investigates summary statistics from primary human genome-wide association studies (GWAS). This type of data is highly suitable for sharing because it is a standard research output, is straightforward to use in future studies (e.g., for secondary analysis), and may already be stored in a standard format for internal sharing within multi-site research projects. Manual checks of 1799 articles from 2010 and 2017 matching a simple PubMed query for molecular epidemiology GWAS were used to identify 314 primary human GWAS papers. Of these, only 13% reported the location of a complete set of GWAS summary data, increasing from 3% in 2010 to 23% in 2017. Whilst information about whether data was shared was typically located clearly within a data availability statement, the exact nature of the shared data was usually unspecified. Thus, data sharing is the exception even in suitable research fields with relatively strong data sharing norms. Moreover, the lack of clear data descriptions within data sharing statements greatly complicates the task of automatically characterising shared data sets.
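The abstract above notes that automating data sharing checks is difficult because statements rarely describe the shared data precisely. As an illustration only, a minimal keyword-based screen for data availability statements might look like the sketch below; the patterns and function name are assumptions for demonstration, not the study's actual (manual) screening method.

```python
import re

# Hypothetical phrases that often signal a data availability statement.
# These patterns are illustrative assumptions, not taken from the study.
AVAILABILITY_PATTERNS = [
    r"data availability statement",
    r"summary statistics (are|were) available",
    r"available (at|from|upon request)",
]

def flags_data_sharing(text: str) -> bool:
    """Return True if any availability-related phrase appears in the text."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in AVAILABILITY_PATTERNS)

example = ("Data Availability Statement: Full GWAS summary statistics "
           "are available from the corresponding author upon request.")
print(flags_data_sharing(example))  # prints True
```

A screen like this can flag *that* something was shared, but, as the abstract points out, it cannot recover *what* was shared when the statement itself leaves the data unspecified.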
Background Two-sample Mendelian randomization (2SMR) is an increasingly popular epidemiological method that uses genetic variants as instruments for making causal inferences. Clear reporting of methods employed in such studies is important for evaluating their underlying quality. However, the quality of methodological reporting of 2SMR studies is currently unclear. We aimed to assess the reporting quality of studies that used MR-Base, one of the most popular platforms for implementing 2SMR analysis. Methods We created a bespoke reporting checklist to evaluate reporting quality of 2SMR studies. We then searched Web of Science Core Collection, PsycInfo, MEDLINE, EMBASE and Google Scholar citations of the MR-Base descriptor paper to identify published MR studies that used MR-Base for any component of the MR analysis. Study screening and data extraction were performed by at least two independent reviewers. Results In the primary analysis, 87 studies were included. Reporting quality was generally poor across studies, with a mean of 53% (SD = 14%) of items reported in each study. Many items required for evaluating the validity of key assumptions made in MR were poorly reported: only 44% of studies provided sufficient details for assessing whether the genetic variant associates with the exposure ('relevance' assumption), 31% for assessing whether there are any variant-outcome confounders ('independence' assumption), 89% for assessing whether the variant causes the outcome independently of the exposure ('exclusion restriction' assumption) and 32% for assumptions of falsification tests. We did not find evidence of a change in reporting quality over time or a difference in reporting quality between studies that used MR-Base and a random sample of MR studies that did not use this platform. Conclusions The quality of reporting of two-sample Mendelian randomization studies in our sample was generally poor.
Journals and researchers should consider using the STROBE-MR guidelines to improve reporting quality.
Background: We studied a novel initiative – Registered Reports Funding Partnerships (RRFPs) – whereby research funders and journals partner to integrate their procedures for funding applications and Registered Reports submissions into one process. We investigated the feasibility of conducting a randomised controlled trial (RCT) of the impact of RRFPs on (1) research quality and (2) the efficiency of the research process, from funding to publication. Methods: We conducted 32 semi-structured interviews and follow-up questionnaires with stakeholders (funders, editors, authors, and reviewers) across six different RRFPs. Results: An RCT of RRFPs appears to be feasible in principle. The partnership concept seems worthwhile to pursue further and is adaptable to the needs of various funders and publishers, and across disciplines. Three primary outcomes of interest should be measurable, and participant randomisation could conceivably be done in a number of ways. In practice, however, the current volume of submissions going through existing partnerships is too low to support a full trial. Conclusions: Although an RCT of RRFPs is conceptually feasible, it will only be possible if organisations are willing to form new partnerships, scale up existing ones, and incorporate a trial (i.e., randomisation) into these partnerships.