A Common Control Group - Optimising the Experiment Design to Maximise Sensitivity
2014 | DOI: 10.1371/journal.pone.0114872

Abstract: Methods for choosing an appropriate sample size in animal experiments have received much attention in the statistical and biological literature. Due to ethical constraints the number of animals used is always reduced where possible. However, as the number of animals decreases so the risk of obtaining inconclusive results increases. By using a more efficient experimental design we can, for a given number of animals, reduce this risk. In this paper two popular cases are considered, where planned comparisons are …

Cited by 18 publications (14 citation statements). References 18 publications.
“… Pre‐registration of experimental design and intended methods of analysis is not yet common in our sector. We agree that optimally unbalanced groups can lead to improved sensitivity and power when the a priori decision is made to analyse them without ANOVA and with (for instance) Dunnett's tests back to a single comparator rather than all pairwise comparisons (Bate and Karp, ). However, in our experience, reviewers and editors often cannot tell whether experiments with unbalanced groups result from planned excellent design or unconsidered design and inadequate transparency, with attrition unreported and exclusions undeclared. Some investigators do not undertake blinded and randomized studies, and animals are added to, or removed from, the study after preliminary analysis.…”
mentioning (confidence: 69%)
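The quoted point about analysing unbalanced groups with Dunnett-style tests back to a single comparator can be made concrete: with k treatment groups and one shared control, testing only against the control requires k comparisons, whereas all pairwise comparisons require (k+1)k/2, so the multiplicity penalty is smaller. The sketch below uses a plain Bonferroni adjustment as an illustrative stand-in for Dunnett's exact (less conservative) correction; the values of k and alpha are assumptions, not figures from the paper.

```python
# Sketch: how restricting comparisons to treatment-vs-control reduces
# the multiplicity penalty. Bonferroni is used here as a simple,
# conservative stand-in for Dunnett's exact adjustment (assumption).
from math import comb
from statistics import NormalDist

def critical_z(n_comparisons, alpha=0.05):
    """Two-sided Bonferroni-adjusted critical value for n_comparisons tests."""
    return NormalDist().inv_cdf(1 - alpha / (2 * n_comparisons))

k = 4  # assumed number of treatment groups, plus one shared control

z_vs_control = critical_z(k)              # k tests back to the control
z_all_pairs = critical_z(comb(k + 1, 2))  # (k+1)k/2 pairwise tests

print(f"k-vs-control: {k} tests, critical z = {z_vs_control:.3f}")
print(f"all pairwise: {comb(k + 1, 2)} tests, critical z = {z_all_pairs:.3f}")
```

Because the critical value for the control-only design is smaller, each comparison retains more power at the same family-wise error rate, which is the sensitivity gain the quoted passage refers to.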
“…Some studies used larger animals; however, these studies used a smaller number of animals per group, possibly as a result of the cost, due to its not being practical for the majority of research installations, as well as presenting difficulties with handling during the realization of procedures [65]. In this context, the reduction in the number of experimental animals may be a complicating factor in research due to increased risk of obtaining inconclusive results [66].…”
Section: Discussion
mentioning (confidence: 99%)
“…Haimez 2002; Aban and George 2015; Singh et al. 2016). Bate and Karp (2014) have looked more closely at the question of relative group sizes, and they show that the traditional, balanced approach is indeed the best experimental design when all pairwise comparisons are planned. However, when the planned comparisons are of several treatment groups with a single control group, there is a small gain in sensitivity by having relatively more animals in the control group.…”
Section: What Group Size For Control Groups?
mentioning (confidence: 99%)
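The allocation result quoted above can be illustrated with a short, hedged sketch: for k treatment groups sharing one control and a fixed total number of animals, the variance of each treatment-vs-control difference is proportional to 1/n_control + 1/n_treat, which is minimised when the control group is roughly √k times the size of each treatment group. The normal-approximation power function, effect size delta, and group totals below are illustrative assumptions, not values from the paper, and multiplicity adjustment is ignored for simplicity.

```python
# Sketch (assumptions): per-comparison power via a normal approximation,
# ignoring multiplicity adjustment; illustrates the sqrt(k) control-group
# allocation discussed in the quoted passage.
from math import sqrt
from statistics import NormalDist

def per_comparison_power(n_control, n_treat, delta=1.0, sigma=1.0, alpha=0.05):
    """Approximate power of one treatment-vs-control comparison."""
    se = sigma * sqrt(1.0 / n_control + 1.0 / n_treat)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    return 1 - NormalDist().cdf(z_crit - delta / se)

k, total = 4, 50  # assumed: 4 treatment groups, 50 animals in total

# Balanced design: all k + 1 groups the same size.
n_bal = total // (k + 1)
power_balanced = per_comparison_power(n_bal, n_bal)

# Unbalanced design: control group ~ sqrt(k) times a treatment group.
n_treat = round(total / (k + sqrt(k)))
n_ctrl = total - k * n_treat
power_unbalanced = per_comparison_power(n_ctrl, n_treat)

print(f"balanced   n_ctrl={n_bal}, n_treat={n_bal}: power ~ {power_balanced:.3f}")
print(f"unbalanced n_ctrl={n_ctrl}, n_treat={n_treat}: power ~ {power_unbalanced:.3f}")
```

With the same total of 50 animals, shifting animals from the treatment groups into the shared control gives each comparison a slightly smaller standard error and hence slightly higher power, matching the "small gain in sensitivity" described above.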