2023
DOI: 10.1214/22-sts865
Response-Adaptive Randomization in Clinical Trials: From Myths to Practical Considerations

Cited by 16 publications (8 citation statements) | References 142 publications
“…In this paper, we propose a metric for comparing group sequential designs based on the cohort most acutely impacted by the choice of design and illustrate how this metric may be applied to select a design in the ARREST and ACCESS contexts. RAR designs are commonly compared using inferential and estimation metrics (e.g., type I error, power, and bias) rather than measures of patient benefit, which remain underreported and have received little attention in the RAR literature (Robertson et al., 2020). This is in part because existing patient benefit metrics, including the expected number of trial failures, the proportion of patients assigned to the inferior arm, and the probability of a treatment imbalance in the wrong direction, are often limited by failures to hold type I and II error rates constant or to account for the different sample size requirements of the designs under consideration (Karrison et al., 2003; Morgan and Coad, 2007; Zhu and Hu, 2010; Robertson et al., 2020).…”
Section: Introduction
confidence: 99%
“…RAR designs are commonly compared using inferential and estimation metrics (e.g., type I error, power, and bias) rather than measures of patient benefit, which remain underreported and have received little attention in the RAR literature (Robertson et al., 2020). This is in part because existing patient benefit metrics, including the expected number of trial failures, the proportion of patients assigned to the inferior arm, and the probability of a treatment imbalance in the wrong direction, are often limited by failures to hold type I and II error rates constant or to account for the different sample size requirements of the designs under consideration (Karrison et al., 2003; Morgan and Coad, 2007; Zhu and Hu, 2010; Robertson et al., 2020). One approach to correct for the latter issue is to compare designs with respect to the expected number of failures within a finite patient horizon (Villar et al., 2015a,b).…”
Section: Introduction
confidence: 99%
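The patient-benefit metrics named in the statements above (expected number of trial failures, proportion of patients assigned to the inferior arm, probability of a treatment imbalance in the wrong direction) are straightforward to estimate by Monte Carlo over a finite patient horizon. The sketch below is a minimal illustration, not any design from the cited papers: it assumes binary outcomes and uses Thompson-sampling RAR with flat Beta(1,1) priors as a stand-in for the RAR rules under discussion; all function names and parameter values are hypothetical.

```python
import random

def simulate_trial(n_patients, p_ctrl, p_trt, adaptive, seed):
    """One two-arm trial with binary outcomes (success = 1).

    adaptive=False: fixed 1:1 randomization.
    adaptive=True : Thompson-sampling RAR with Beta(1,1) priors
                    (an illustrative stand-in, not a specific cited rule).
    Returns (n_failures, n_on_inferior_arm, imbalance_in_wrong_direction).
    """
    rng = random.Random(seed)
    succ, fail = [0, 0], [0, 0]          # per-arm outcome counts
    probs = (p_ctrl, p_trt)
    inferior = 0 if p_ctrl < p_trt else 1  # arm with lower success probability
    n_arm = [0, 0]
    n_failures = 0
    for _ in range(n_patients):
        if adaptive:
            # allocate to the arm with the larger posterior draw
            draws = [rng.betavariate(1 + succ[a], 1 + fail[a]) for a in (0, 1)]
            arm = 0 if draws[0] > draws[1] else 1
        else:
            arm = rng.randrange(2)
        n_arm[arm] += 1
        if rng.random() < probs[arm]:
            succ[arm] += 1
        else:
            fail[arm] += 1
            n_failures += 1
    wrong_imbalance = n_arm[inferior] > n_arm[1 - inferior]
    return n_failures, n_arm[inferior], wrong_imbalance

def patient_benefit_metrics(adaptive, n_sims=2000, n_patients=200,
                            p_ctrl=0.3, p_trt=0.5):
    """Monte Carlo estimates of the three metrics over a fixed horizon."""
    totals = [0, 0, 0]
    for i in range(n_sims):
        result = simulate_trial(n_patients, p_ctrl, p_trt, adaptive, seed=i)
        for j in range(3):
            totals[j] += result[j]
    return [t / n_sims for t in totals]
```

Under this (hypothetical) configuration the adaptive design should show fewer expected failures and fewer inferior-arm patients than fixed 1:1 randomization, at the cost of allocation variability; as the quoted passages note, a fair comparison additionally requires matching error rates and sample size, which this sketch does not attempt.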
“…There has been intense debate in the clinical trials literature regarding the merits and perils of adaptive randomization. A recent review by Robertson et al. [4] provides an extensive summary of the available methods including concerns raised for their use in clinical trials and potential approaches for mitigation. Commonly cited areas of concern include sample size imbalance in the opposite direction, loss of statistical power, biased effect estimates, and potential for invalid inferences with small samples in the frequentist framework because of the correlation between treatment assignment and outcome induced by the adaptation.…”
confidence: 99%
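One of the concerns quoted above, biased effect estimates, arises because adaptation correlates each arm's sample size with its observed performance: an arm that looks bad early receives fewer later patients, so early bad luck is never averaged away. The sketch below illustrates this negative bias of the naive sample-mean estimator under Thompson-sampling RAR on two identical arms; the setup (two arms with true success probability 0.5, a 60-patient trial, a two-patient burn-in) is hypothetical and chosen only to make the effect visible.

```python
import random

def arm_mean_after_rar(n_patients, p_true, seed):
    """Run Thompson-sampling RAR on two arms with identical success
    probability p_true, then return the naive (sample-mean) estimate
    of arm 0's success rate.  The first two patients are assigned
    deterministically, one per arm, so both sample means exist."""
    rng = random.Random(seed)
    succ, fail = [0, 0], [0, 0]
    for i in range(n_patients):
        if i < 2:
            arm = i  # burn-in: one patient per arm
        else:
            draws = [rng.betavariate(1 + succ[a], 1 + fail[a]) for a in (0, 1)]
            arm = 0 if draws[0] > draws[1] else 1
        if rng.random() < p_true:
            succ[arm] += 1
        else:
            fail[arm] += 1
    return succ[0] / (succ[0] + fail[0])

def mean_naive_estimate(n_sims=20000, n_patients=60, p_true=0.5):
    """Average the naive estimator over many simulated trials; under RAR
    this average falls below p_true (negative bias), whereas fixed
    randomization would give an unbiased sample mean."""
    return sum(arm_mean_after_rar(n_patients, p_true, seed=i)
               for i in range(n_sims)) / n_sims
```

Averaged over many simulated trials, the estimate sits detectably below the true value of 0.5, which is the frequentist small-sample concern the quoted passage describes; Thompson sampling here is only one illustrative RAR rule among the many the review covers.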
“…The jury seems to be out on this question with numerous influential voices in the statistical and clinical trials community landing on opposite sides of the debate. [4-7] The authors of the current manuscript include a simulation study in the supplemental materials with the intent to demonstrate that the use of response-adaptive randomization resulted in fewer patients being randomly assigned to CC-115 relative to a conventional 1:1:1:1 randomized design. This is misleading as the enrollment to the CC-115 arm was restricted by the 3 + 3 design used for the safety lead-in.…”
confidence: 99%