2003
DOI: 10.1002/sim.1457

Mid‐course sample size modification in clinical trials based on the observed treatment effect

Abstract: It is not uncommon to set the sample size in a clinical trial to attain specified power at a value for the treatment effect deemed likely by the experimenters, even though a smaller treatment effect would still be clinically important. Recent papers have addressed the situation where such a study produces only weak evidence of a positive treatment effect at an interim stage and the organizers wish to modify the design in order to increase the power to detect a smaller treatment effect than originally expected.…
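
For context, the fixed-sample calculation that creates this problem can be written out directly: the sample size is chosen to give the desired power at the treatment effect the experimenters deem likely, and the same normal approximation shows how far power falls when the true effect is smaller but still clinically important. The Python sketch below is illustrative only; the effect sizes, variance and error rates are assumed values, not numbers taken from the paper.

# Sketch: a fixed-sample two-arm trial sized for power at an assumed effect,
# and the power actually achieved if the true effect is smaller.
# All numerical values are illustrative assumptions.
from scipy.stats import norm

def per_arm_n(delta, sigma, alpha=0.05, power=0.90):
    """Per-arm sample size for a two-sided level-alpha two-sample z-test."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return 2 * (sigma * (z_a + z_b) / delta) ** 2

def achieved_power(n, delta_true, sigma, alpha=0.05):
    """Power of that test at a (possibly smaller) true effect delta_true."""
    z_a = norm.ppf(1 - alpha / 2)
    return norm.cdf(delta_true * (n / (2 * sigma ** 2)) ** 0.5 - z_a)

n = per_arm_n(delta=0.5, sigma=1.0)                   # sized for the "likely" effect
print(round(n))                                       # about 84 per arm
print(achieved_power(n, delta_true=0.3, sigma=1.0))   # power falls to roughly 0.5
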

Cited by 163 publications (162 citation statements). References 27 publications.

“…θ under which it may be decided later that higher power is desirable. In our experience, adaptations which make a noticeable change to a test's power curve are liable to introduce inefficiency at least as great as that seen in our two examples and often much larger; see Jennison & Turnbull (2003) for an example of a two-stage adaptive design with much higher efficiency loss. In the following sections we complement this empirical evidence with theory and numerical evaluation of optimal tests within well-defined adaptive and non-adaptive classes.…”
Section: Discussion of Examples
Citation type: mentioning (confidence: 70%)
“…More recently, Cui et al. (1999), L. D. Fisher (1998), Shen & Fisher (1999) and Müller & Schäfer (2001), among others, have proposed a variety of methods that preserve the type I error rate despite completely unplanned design changes. Although differing in appearance and derivation, these methods are closely related in that each preserves the conditional type I error probability whenever the design is modified; Jennison & Turnbull (2003) prove this must be the case for any unplanned re-design that preserves the overall type I error rate.…”
Citation type: mentioning (confidence: 99%)
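
The conditional-error point in this excerpt can be illustrated with one of the cited constructions, the weighted combination statistic of Cui et al. (1999): stage-wise z-statistics are combined with weights fixed by the originally planned sample sizes, so the null distribution of the final statistic, and hence the type I error rate, is unchanged even when the second-stage sample size is modified after inspecting the interim data. The simulation below is a minimal sketch under assumed normal outcomes; the design parameters and the re-estimation rule are hypothetical.

# Sketch of a Cui-Hung-Wang-style weighted combination test: the stage weights
# are fixed by the *planned* sample sizes, so changing the second-stage sample
# size after seeing stage-1 data does not inflate the type I error.
# Design parameters and the re-estimation rule are illustrative assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n1, n2_planned, alpha = 50, 50, 0.025            # planned per-stage sizes, one-sided test
w1 = np.sqrt(n1 / (n1 + n2_planned))             # pre-fixed stage weights
w2 = np.sqrt(n2_planned / (n1 + n2_planned))

def one_trial(theta):
    x1 = rng.normal(theta, 1.0, n1)
    z1 = x1.mean() * np.sqrt(n1)                 # stage-1 z-statistic
    n2 = n2_planned * (3 if z1 < 1.0 else 1)     # data-driven redesign at the interim
    x2 = rng.normal(theta, 1.0, n2)
    z2 = x2.mean() * np.sqrt(n2)                 # stage-2 z-statistic
    z_chw = w1 * z1 + w2 * z2                    # weighted statistic, weights unchanged
    return z_chw > norm.ppf(1 - alpha)

rejection_rate = np.mean([one_trial(theta=0.0) for _ in range(20000)])
print(rejection_rate)                            # stays close to the nominal 0.025
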
“…Adaptive sample size reestimation avoids these pitfalls and can reduce the expected sample size, and in turn the cost of the study, under a range of treatment effects. Protocols and procedures for re-specification of sample size are well described in the literature [4, 17-21]. This type of adaptive design can arguably reduce time and cost, but does not specifically deal with optimizing inclusion/exclusion criteria.…”
Section: Discussion
Citation type: mentioning (confidence: 99%)
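
One common way to carry out such a re-specification, in the spirit of the protocols cited above, is to compute the conditional power at the interim analysis given the observed effect estimate and then increase the second-stage sample size, up to a pre-specified cap, until a target conditional power is reached. The sketch below assumes normal outcomes with known variance and fixed combination weights as in the previous example; the interim statistic, target power and cap are hypothetical values chosen for illustration.

# Sketch: conditional-power-based sample size re-estimation at an interim look.
# Normal outcomes with unit variance; the interim statistic, target power and
# cap are illustrative assumptions, not values from the cited protocols.
import numpy as np
from scipy.stats import norm

def conditional_power(z1, n1, n2, theta, crit, w1, w2):
    """P(reject at the final analysis | stage-1 statistic z1) under effect theta."""
    mean_z2 = theta * np.sqrt(n2)                # expected stage-2 z-statistic
    return 1 - norm.cdf((crit - w1 * z1) / w2 - mean_z2)

n1 = n2_planned = 50
w1 = w2 = np.sqrt(0.5)                           # pre-fixed combination weights
crit = norm.ppf(1 - 0.025)                       # one-sided critical value
z1 = 1.3                                         # observed interim statistic (hypothetical)
theta_hat = z1 / np.sqrt(n1)                     # interim estimate of the effect

# Increase n2, up to a cap of 4x the planned size, until conditional power at
# theta_hat reaches 0.80; if the target is never met, n2 stops at the cap.
for n2 in range(n2_planned, 4 * n2_planned + 1):
    if conditional_power(z1, n1, n2, theta_hat, crit, w1, w2) >= 0.80:
        break
print(n2, conditional_power(z1, n1, n2, theta_hat, crit, w1, w2))
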
“…Statistics in Medicine has published articles on the subject throughout the 25 years of the journal. In the early years, the emphasis was perhaps more on the application of techniques described elsewhere (for example [11-14]), but more recently the focus has shifted, with authors submitting more methodological manuscripts to the journal (for example [15-18]).…”
Section: Introduction
Citation type: mentioning (confidence: 99%)