1971
DOI: 10.1259/0007-1285-44-526-793
Repeated assessment of results in clinical trials of cancer treatment

Abstract: A clinical trial comparing two treatments for cancer is bound to extend over several years, partly because of the time required to collect sufficient numbers of patients and partly because of the time required before the results of treatment can be assessed. It is, therefore, common practice to make surveys of the results to date at intermediate stages of the trial. These preliminary assessments may be used to stop the trial if a new unproven treatment method is seen to be unexpectedly much worse than the usua…

Cited by 453 publications (218 citation statements)
References 5 publications
“…An independent data monitoring committee monitored effectiveness and safety annually. The data monitoring committee used the Haybittle-Peto 11,12 approach for interim analyses, using three standard errors as the cutoff for consideration of early cessation, preserving the type-one error rate across the trial.…”
Section: Study Design and Participants
confidence: 99%
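The Haybittle-Peto rule quoted above can be sketched in a few lines. The function names and the default boundary of 3 standard errors here are illustrative, not taken from the cited trial; the key idea is that the interim threshold is so extreme that the final analysis can still be run at close to the nominal significance level.

```python
from math import erf, sqrt

def haybittle_peto_decision(z_interim, boundary=3.0):
    """Haybittle-Peto rule: recommend early cessation only if the
    interim test statistic exceeds ~3 standard errors; otherwise
    continue the trial to its planned end."""
    return "stop" if abs(z_interim) >= boundary else "continue"

def two_sided_p(z):
    """Two-sided normal p-value for a z statistic."""
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))

# A |z| of 3 corresponds to a two-sided p of about 0.0027, so the
# interim looks spend almost none of the type-one error rate.
print(haybittle_peto_decision(2.4))   # not extreme enough: continue
print(haybittle_peto_decision(3.2))   # crosses the 3-SE boundary: stop
print(round(two_sided_p(3.0), 4))
```

Because each interim look consumes only a tiny slice of alpha, the final comparison needs essentially no adjustment, which is what makes the rule attractive for a data monitoring committee meeting annually.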
“…We included a stopping rule in which recruitment would be stopped at the midpoint for futility if the z-score was negative, or for efficacy if the z-score was positive and the p value was ≤ 0.001. 42,43 As we met the efficacy stopping rule, enrollment stopped after 100 patients. All patient-clinician encounters were analyzed according to the group to which they were randomly assigned.…”
Section: Sample Size
confidence: 99%
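The midpoint look described in this citing study can be expressed as a small decision function. The name and structure below are illustrative; only the rule itself (stop for futility if z is negative, for efficacy if z is positive with two-sided p ≤ 0.001) comes from the quoted text.

```python
from math import erf, sqrt

def midpoint_stopping_decision(z, p_threshold=0.001):
    """Single interim look at the recruitment midpoint:
    - z < 0  -> stop for futility
    - z > 0 and two-sided p <= p_threshold -> stop for efficacy
    - otherwise continue to full enrollment."""
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    if z < 0:
        return "stop for futility"
    if p <= p_threshold:
        return "stop for efficacy"
    return "continue"

print(midpoint_stopping_decision(-0.5))  # effect in the wrong direction
print(midpoint_stopping_decision(3.5))   # strong positive effect
print(midpoint_stopping_decision(2.0))   # positive but not extreme
```

A two-sided p of 0.001 corresponds to |z| of roughly 3.3, so this is in the same spirit as the Haybittle-Peto boundary: only an overwhelming interim result halts the trial.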
“…Various modeling methods have been developed to combine (meta-analyze) the results of such trials. 33 This approach could be useful in designing pediatric pilot studies, when funding and/or participants are difficult to obtain. However, this design will only be feasible when dealing with short-term, temporary outcomes (eg, pain) and when there is no carryover effect.…”
Section: Meta-analysis of N-of-1 Trials
confidence: 99%
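One simple way to combine per-patient treatment effects from a series of n-of-1 trials is a fixed-effect inverse-variance average. This is only a minimal sketch under that assumption; the modeling literature the statement cites covers richer approaches (eg, hierarchical Bayesian models), and the numbers below are made up for illustration.

```python
from math import sqrt

def inverse_variance_pool(estimates, std_errors):
    """Fixed-effect meta-analysis: weight each per-patient effect
    estimate by the inverse of its variance, then average."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical effects from three n-of-1 trials of the same treatment.
effect, se = inverse_variance_pool([1.2, 0.8, 1.0], [0.4, 0.5, 0.3])
print(round(effect, 3), round(se, 3))
```

The pooled standard error shrinks as trials are added, which is why combining several small n-of-1 trials can substitute for a pilot study when participants are scarce.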
“…Various ways of adjustment have been described. [33][34][35][36] Although sequential designs may on occasion lead to a smaller sample size (when the interim data demonstrate definitively that one treatment is better than another, when there are serious safety concerns, or when further study is unlikely to demonstrate any difference), such designs cannot be relied on to reduce the sample size.…”
Section: Sequential Designs
confidence: 99%