2022
DOI: 10.1371/journal.pone.0275962

Reporting quality in preclinical animal experimental research in 2009 and 2018: A nationwide systematic investigation

Abstract: Lack of translation and irreproducibility challenge preclinical animal research. Insufficient reporting of the methodologies that safeguard study quality is part of the reason. This nationwide study investigates the reporting prevalence of these methodologies and scrutinizes the level of detail of the reported information. Publications were drawn from two time periods to capture any reporting progress and had at least one author affiliated with a Danish university. We retrieved all relevant animal experimental studies using a predef…

Cited by 6 publications (4 citation statements) · References 46 publications
“…For the post hoc tests, we employed the Dunn-Bonferroni test to correct for alpha inflation and thereby limit the potential for false-positive results. It is puzzling that conducting and reporting a sample size calculation at all remains so uncommon; a recent systematic review found the practice to have increased only from 5.2% to 7.6% by 2018 [109], despite the methodological drawbacks of omitting it [110]. Having performed a sample size calculation does not, however, support the assumptions that we made (at least some level of transferability from dogs to pigs), but it does lend some support to the reliability of the results, provided its assumptions were true, for which there are no data at present.…”
Section: Discussion (mentioning)
confidence: 99%
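For context, a minimal sketch of the two calculations mentioned in this excerpt, assuming a two-sided two-sample t-test; the effect size, alpha, power, and number of comparisons are illustrative placeholders, not values from the cited studies, and plain Bonferroni division stands in for the full Dunn-Bonferroni procedure:

# Sketch: Bonferroni-adjusted alpha and an a priori sample size calculation.
# All numeric inputs below are hypothetical, not taken from the cited work.
from statsmodels.stats.power import TTestIndPower

alpha = 0.05                             # family-wise error rate to protect
n_comparisons = 3                        # hypothetical number of post hoc pairwise tests
alpha_per_test = alpha / n_comparisons   # Bonferroni correction for alpha inflation

analysis = TTestIndPower()               # power analysis for a two-sample t-test
n_per_group = analysis.solve_power(
    effect_size=0.8,                     # assumed Cohen's d
    alpha=alpha_per_test,
    power=0.8,
    alternative="two-sided",
)
print(f"alpha per test: {alpha_per_test:.4f}")
print(f"required animals per group: {n_per_group:.1f}")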
“…Similarly, before the implementation of requirements to report study design elements (e.g., randomization, blinding, sample size estimation) across three major journals in 2011, less than 33% of studies reported randomization, less than 47% reported blinding, and less than 6% reported sample size estimation. The number of articles reporting these factors increased significantly after journal interventions were implemented (21,22). Yet despite advances in required reporting by journals, fewer than 50% of published studies include rigor criteria such as randomization, blinding, and sample size estimation (23).…”
Section: Experimental Design Factors (mentioning)
confidence: 99%
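For concreteness, a minimal sketch of the first two design elements named above: a seeded random allocation of animals to coded (blinded) groups. The group codes, seed, and cohort size are hypothetical:

# Sketch: randomized allocation with blinded group codes (all values hypothetical).
import random

rng = random.Random(42)                  # fixed seed makes the allocation reproducible
animals = [f"animal_{i:02d}" for i in range(1, 13)]
group_codes = ["A", "B"]                 # coded labels; the treatment key is held
                                         # separately until unblinding
order = animals[:]
rng.shuffle(order)                       # randomization step
assignment = {a: group_codes[i % len(group_codes)] for i, a in enumerate(order)}
for animal, code in sorted(assignment.items()):
    print(animal, "->", code)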
“…The literature search, random sampling, and data retrieval were described previously [19]. Briefly, the literature search was conducted in Medline (via PubMed) and Embase for all citations that referred to in vivo studies conducted on non-human vertebrates with one or more authors affiliated with at least one of five Danish universities of interest.…”
Section: Data Sources and Eligibility Criteria (mentioning)
confidence: 99%
“…These 250 publications were selected based on the random sampling allocation sequence. The PRISMA flow diagram is available in Kousholt et al [19].…”
Section: Data Sources and Eligibility Criteria (mentioning)
confidence: 99%
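A minimal sketch of how a reproducible random sampling allocation sequence over retrieved citations might look; the record IDs, seed, and pool size are placeholders, and only the sample size of 250 echoes the text above (this is not the actual procedure from Kousholt et al. [19]):

# Sketch: seeded random sample of 250 records from a retrieved citation pool.
# Record IDs, seed, and pool size are illustrative, not from the cited study.
import random

records = [f"record_{i:04d}" for i in range(1, 5001)]  # hypothetical search results
rng = random.Random(2018)                # fixed seed -> reproducible allocation sequence
sampled = rng.sample(records, 250)       # the 250 publications selected for review
print(sampled[:5])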