We consider the problem of identifying a subgroup of patients who may have an enhanced treatment effect in a randomized clinical trial, where it is desirable that the subgroup be defined by a limited number of covariates. For this problem, the development of a standard, pre-determined strategy may help to avoid the well-known dangers of subgroup analysis. We present a method developed to find subgroups of enhanced treatment effect. This method, referred to as "Virtual Twins", involves predicting response probabilities for treatment and control "twins" for each subject. The difference in these probabilities is then used as the outcome in a classification or regression tree, which can potentially include any subset of the covariates. We define a measure Q(Â) to be the difference between the treatment effect in the estimated subgroup Â and the marginal treatment effect. We present several methods developed to obtain an estimate of Q(Â), including estimation using predicted probabilities in the original data, estimation using predicted probabilities in newly simulated data, two cross-validation-based approaches, and a bootstrap-based bias-corrected approach. Results of a simulation study indicate that the Virtual Twins method noticeably outperforms logistic regression with forward selection when a true subgroup of enhanced treatment effect exists. Generally, large sample sizes or strong enhanced treatment effects are needed for subgroup estimation. As an illustration, we apply the proposed methods to data from a randomized clinical trial.
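The two-stage idea above can be illustrated with a minimal numpy-only sketch on toy data. It substitutes k-nearest-neighbour averages for the random forest the paper uses to predict the treatment and control "twins", and a one-split stump for the regression tree; the data, the function names (`twin_probs`, `best_stump`), and all tuning values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy randomized trial: binary outcome, enhanced effect only when x0 > 0
n = 2000
X = rng.normal(size=(n, 2))
T = rng.integers(0, 2, size=n)          # randomized treatment assignment
p = 0.3 + 0.4 * T * (X[:, 0] > 0)       # treatment helps only in the x0 > 0 subgroup
Y = rng.binomial(1, p)

def twin_probs(X, T, Y, k=50):
    """For each subject, estimate the outcome probability under treatment and
    under control (the 'virtual twins') with k-nearest-neighbour averages,
    a simple stand-in for the paper's random forest."""
    p1, p0 = np.empty(len(X)), np.empty(len(X))
    for arm, out in ((1, p1), (0, p0)):
        Xa, Ya = X[T == arm], Y[T == arm]
        for i, x in enumerate(X):
            d = np.sum((Xa - x) ** 2, axis=1)
            out[i] = Ya[np.argsort(d)[:k]].mean()
    return p1, p0

p1, p0 = twin_probs(X, T, Y)
z = p1 - p0   # estimated individual treatment effect: the tree's outcome

def best_stump(X, z):
    """One-split regression 'tree': pick the covariate and threshold whose
    split minimises the residual sum of squares of z."""
    best = (np.inf, None, None)
    for j in range(X.shape[1]):
        for c in np.quantile(X[:, j], np.linspace(0.1, 0.9, 17)):
            left = X[:, j] <= c
            sse = ((z[left] - z[left].mean()) ** 2).sum() + \
                  ((z[~left] - z[~left].mean()) ** 2).sum()
            if sse < best[0]:
                best = (sse, j, c)
    return best[1], best[2]

feature, threshold = best_stump(X, z)
print(f"estimated subgroup rule: x{feature} > {threshold:.2f}")
```

On this toy data the stump recovers a rule on x0, mirroring how the fitted tree defines the estimated subgroup Â by a small number of covariates.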
Defining the scientific questions of interest in a clinical trial is crucial to align its planning, design, conduct, analysis, and interpretation. However, practical experience shows that specific choices in the statistical analysis often blur the scientific question, in part or even completely, resulting in misalignment between trial objectives, conduct, and analysis, and in confusion over interpretation. The need for more clarity was highlighted by the Steering Committee of the International Council for Harmonisation (ICH) in 2014, which endorsed a Concept Paper with the goal of developing a new regulatory guidance, suggested to be an addendum to ICH guideline E9. Triggered by these developments, we elaborate in this paper on what the relevant questions in drug development are and how they fit with the current practice of intention-to-treat analyses. To this end, we consider the perspectives of patients, physicians, regulators, and payers. We argue that despite the different backgrounds and motivations of the various stakeholders, they all have similar interests in what the clinical trial estimands should be. Broadly, these can be classified into estimands addressing (a) lack of adherence to treatment due to different reasons and (b) efficacy and safety profiles when patients, in fact, are able to adhere to the treatment for its intended duration. We conclude that disentangling adherence to treatment from the efficacy and safety of treatment in patients who adhere leads to a transparent and clinically meaningful assessment of treatment risks and benefits. We touch upon statistical considerations and offer a discussion of additional implications. Copyright © 2016 John Wiley & Sons, Ltd.
In comparing two treatments with event time observations, the hazard ratio (HR) estimate is routinely used to quantify the treatment difference. However, this model-dependent estimate may be difficult to interpret clinically, especially when the proportional hazards (PH) assumption is violated. An alternative estimation procedure for treatment efficacy based on the restricted mean survival time, or t-year mean survival time (t-MST), has been discussed extensively in the statistical and clinical literature. On the other hand, a statistical test via the HR or its asymptotically equivalent counterpart, the logrank test, is asymptotically distribution-free. In this paper, we assess the relative efficiency of the HR and t-MST tests with respect to statistical power under various PH and non-PH models, both theoretically and empirically. When the PH assumption is valid, the t-MST test performs almost as well as the HR test. For non-PH models, the t-MST test can substantially outperform its HR counterpart. On the other hand, the HR test can be powerful when the true difference between the two survival functions is quite large at the end, but not at the beginning, of the study. Unfortunately, in this case the HR estimate may not have a simple clinical interpretation for the treatment effect due to the violation of the PH assumption.
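The t-MST quantity discussed above is simply the area under the Kaplan-Meier survival curve up to a truncation time tau, i.e. the average event-free time over [0, tau]. A minimal numpy-only sketch, with a hypothetical function name `rmst` and toy inputs (not drawn from the paper):

```python
import numpy as np

def rmst(time, event, tau):
    """Restricted mean survival time up to tau: the area under the
    Kaplan-Meier estimate of the survival curve over [0, tau].
    `event` is 1 for an observed event, 0 for a censored observation."""
    order = np.argsort(time)
    t, d = np.asarray(time, float)[order], np.asarray(event)[order]
    at_risk, surv, area, prev = len(t), 1.0, 0.0, 0.0
    for ti, di in zip(t, d):
        if ti > tau:
            break
        area += surv * (ti - prev)      # rectangle under the current KM step
        prev = ti
        if di:                          # survival drops only at event times
            surv *= (at_risk - 1) / at_risk
        at_risk -= 1
    return area + surv * (tau - prev)   # final rectangle out to tau

# All-events example: S(t) steps through 1, 0.75, 0.5, 0.25,
# so the area over [0, 4] is 1 + 0.75 + 0.5 + 0.25 = 2.5
print(rmst([1, 2, 3, 4], [1, 1, 1, 1], tau=4))   # → 2.5
```

A t-MST-based comparison of two arms then contrasts these areas (e.g. their difference or ratio), which keeps a direct clinical reading, in units of time, even when the PH assumption fails.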
Randomized controlled trials remain a gold standard for evaluating the efficacy and safety of a new treatment. Ideally, patients adhere to their treatments for the duration of the study, and the resulting data can be analyzed unambiguously for efficacy and safety outcomes. However, some patients may discontinue the study treatment due to intercurrent events, which leaves missing observations or observations that do not reflect the randomly assigned treatment. Frequently, an intent-to-treat analysis (or a modification thereof) is done to estimate the treatment effect for all randomized patients regardless of the occurrence of intercurrent events. Alternatively, clinicians may be more interested in understanding the efficacy and safety for those who can adhere to the study treatment. The naive per-protocol analysis may provide a biased estimate of the treatment difference because the observed adherent populations may not be comparable between the two treatments. In this article, we propose two methods for estimating the treatment difference for those who can adhere to one or both treatments, based on the counterfactual framework. Theoretical derivations and a simulation study show that the proposed methods provide consistent estimators of the treatment difference for the adherent population of interest. A real data example comparing two basal insulins for patients with type 1 diabetes is analyzed using the proposed methods. Supplementary materials for this article are available online.