The COVID-19 pandemic has had and continues to have major impacts on planned and ongoing clinical trials. Its effects on trial data create multiple potential statistical issues. The scale of impact is unprecedented, but when viewed individually, many of the issues are well defined and feasible to address. A number of strategies and recommendations are put forward to assess and address issues related to estimands, missing data, validity and modifications of statistical analysis methods, need for additional analyses, ability to meet objectives and overall trial interpretability.
Criteria for treatment-resistant depression (TRD) and partially responsive depression (PRD) as subtypes of major depressive disorder (MDD) are not unequivocally defined. In the present document we used a Delphi-method-based consensus approach to define TRD and PRD and to serve as operational criteria for future clinical studies, especially if conducted for regulatory purposes. We reviewed the literature and brought together a group of international experts (including clinicians, academics, researchers, employees of pharmaceutical companies, regulatory bodies representatives, and one person with lived experience) to evaluate the state-of-the-art and main controversies regarding the current classification. We then provided recommendations on how to design clinical trials, and on how to guide research in unmet needs and knowledge gaps. This report will feed into one of the main objectives of the EUropean Patient-cEntric clinicAl tRial pLatforms, Innovative Medicines Initiative (EU-PEARL, IMI) MDD project, to design a protocol for platform trials of new medications for TRD/PRD.
A network meta-analysis allows a simultaneous comparison between treatments evaluated in randomised controlled trials that share at least one treatment with at least one other study. Estimates of treatment effects may be required for treatments across disconnected networks of evidence, which requires a different statistical approach and modelling assumptions to account for imbalances in prognostic variables and treatment effect modifiers between studies. In this paper, we review and discuss methods for comparing treatments evaluated in studies that form disconnected networks of evidence. Several methods have been proposed, but assessing which are appropriate often depends on the clinical context as well as the availability of data. Most methods account for sampling variation but do not always account for other sources of uncertainty. We suggest that further research is required to assess the properties of methods and the use of approaches that allow the incorporation of external information to reflect parameter and structural uncertainty.
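Whether a set of trials forms a connected evidence network, as discussed in the abstract above, can be checked by treating treatments as nodes and trials as edges. The following is a minimal, hypothetical sketch (the trial data are invented for illustration) using a breadth-first search:

```python
from collections import defaultdict, deque

# Hypothetical trial data: each trial lists the treatments it compares.
# A standard network meta-analysis requires the resulting evidence
# network to be connected; this sketch checks that with a BFS.
trials = [
    {"A", "B"},  # trial 1 compares A vs B
    {"B", "C"},  # trial 2 compares B vs C
    {"D", "E"},  # trial 3 compares D vs E (no shared treatment with A, B, C)
]

def is_connected(trials):
    """Return True if every treatment is reachable from every other."""
    graph = defaultdict(set)
    for arms in trials:
        for t in arms:
            graph[t] |= arms - {t}
    if not graph:
        return True
    start = next(iter(graph))
    seen = {start}
    queue = deque([start])
    while queue:
        for nbr in graph[queue.popleft()]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen == set(graph)

print(is_connected(trials))  # -> False: {D, E} is a disconnected component
```

When this check fails, as here, the standard connected-network machinery no longer applies and the disconnected-network methods reviewed in the paper become relevant.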
The GetReal consortium ("incorporating real-life data into drug development") addresses the efficacy-effectiveness gap that opens between the data from well-controlled randomized trials in selected patient groups submitted to regulators and the real-world evidence on effectiveness and safety of drugs required by decision makers. Workpackage 4 of GetReal develops evidence synthesis and modelling approaches to generate the real-world evidence. In this commentary, we discuss how questions change when moving from the well-controlled randomized trial setting to real-life medical practice, the evidence required to answer these questions, the populations to which estimates will be applicable, and the methods and data sources used to produce these estimates. We then introduce the methodological reviews written by GetReal authors and published in Research Synthesis Methods on network meta-analysis, individual patient data meta-analysis and mathematical modelling to predict drug effectiveness. The critical reviews of key methods are a good starting point for the ambitious programme of work GetReal has embarked on. The different strands of work under way in GetReal have great potential to contribute to making clinical trials research as relevant as it can be to patients, caregivers and policy makers. Copyright © 2016 John Wiley & Sons, Ltd.
Subgroup analysis is an integral part of access and reimbursement dossiers, in particular for health technology assessment (HTA), and HTA recommendations are often limited to subpopulations. HTA recommendations for subpopulations are not always clear and without controversies. In this paper, we review several HTA guidelines regarding subgroup analyses. We describe good statistical principles for subgroup analyses of clinical effectiveness to support HTAs and include case examples where HTA recommendations were given to subpopulations only. Unlike regulatory submissions, pharmaceutical statisticians in most companies have had limited involvement in the planning, design and preparation of HTA/payer submissions. We hope to change this by highlighting how pharmaceutical statisticians should contribute to payers' submissions. This includes early engagement in reimbursement strategy discussions to influence the design, analysis and interpretation of phase III randomized clinical trials as well as meta-analyses/network meta-analyses. The focus of this paper is on subgroup analyses relating to clinical effectiveness, as we believe this is the first key step of statistical involvement and influence in the preparation of HTA and reimbursement submissions.
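As a toy illustration of the kind of subgroup analysis of clinical effectiveness discussed above, the sketch below estimates a treatment effect (mean difference versus control) overall and within each subgroup; the data, outcome values and subgroup labels are entirely invented:

```python
from statistics import mean

# Invented records: (subgroup, treated?, outcome). In a real HTA
# submission these would come from a phase III trial dataset.
records = [
    ("biomarker+", True, 5.1), ("biomarker+", True, 4.8),
    ("biomarker+", False, 2.0), ("biomarker+", False, 2.4),
    ("biomarker-", True, 3.0), ("biomarker-", True, 2.7),
    ("biomarker-", False, 2.6), ("biomarker-", False, 2.9),
]

def effect(rows):
    """Mean difference in outcome: treated minus control."""
    treated = [y for _, t, y in rows if t]
    control = [y for _, t, y in rows if not t]
    return mean(treated) - mean(control)

print("overall:", round(effect(records), 2))
for g in sorted({r[0] for r in records}):
    subgroup = [r for r in records if r[0] == g]
    print(g, round(effect(subgroup), 2))
```

A large gap between subgroup-specific effects, as in this toy example, is exactly the situation where an HTA body might restrict its recommendation to a subpopulation; in practice such estimates would be pre-specified and accompanied by interaction tests and uncertainty intervals, not point estimates alone.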
Background: Greater transparency, including sharing of patient-level data for further research, is an increasingly important topic for organisations who sponsor, fund and conduct clinical trials. This is a major paradigm shift with the aim of maximising the value of patient-level data from clinical trials for the benefit of future patients and society. We consider the analysis of shared clinical trial data in three broad categories: (1) reanalysis - further investigation of the efficacy and safety of the randomized intervention; (2) meta-analysis; and (3) supplemental analysis for a research question that is not directly assessing the randomized intervention.
Discussion: In order to support appropriate interpretation and limit the risk of misleading findings, analysis of shared clinical trial data should have a pre-specified analysis plan. However, it is not generally possible to limit bias and control multiplicity to the extent that is possible in the original trial design, conduct and analysis, and this should be acknowledged and taken into account when interpreting results. We highlight a number of areas where specific considerations arise in planning, conducting, interpreting and reporting analyses of shared clinical trial data. A key issue is that these analyses essentially share many of the limitations of any post hoc analyses beyond the original specified analyses. The use of individual patient data in meta-analysis can provide increased precision and reduce bias. Supplemental analyses are subject to many of the same issues that arise in broader epidemiological analyses. Specific discussion topics are addressed within each of these areas.
Summary: Increased provision of patient-level data from industry and academic-led clinical trials for secondary research can benefit future patients and society. Responsible data sharing, including transparency of the research objectives, analysis plans and results, will support appropriate interpretation and help to address the risk of misleading results and avoid unfounded health scares.