Adaptive designs can make clinical trials more flexible by utilising results accumulating in the trial to modify the trial’s course in accordance with pre-specified rules. Trials with an adaptive design are often more efficient, informative and ethical than trials with a traditional fixed design, since they make better use of resources such as time and money and might require fewer participants. Adaptive designs can be applied across all phases of clinical research, from early-phase dose escalation to confirmatory trials. The pace of the uptake of adaptive designs in clinical research, however, has remained well behind that of the statistical literature introducing new methods and highlighting their potential advantages. We speculate that one contributing factor is that the full range of adaptations available to trial designs, as well as their goals, advantages and limitations, remains unfamiliar to many parts of the clinical community. Additionally, the term adaptive design has been misleadingly used as an all-encompassing label for certain methods that could be deemed controversial or that have been inadequately implemented. We believe that even if the planning and analysis of a trial is undertaken by an expert statistician, it is essential that the investigators understand the implications of using an adaptive design: for example, what the practical challenges are, what can (and cannot) be inferred from the results of such a trial, and how to report and communicate the results. This tutorial paper provides guidance on key aspects of adaptive designs that are relevant to clinical triallists. We explain the basic rationale behind adaptive designs, clarify ambiguous terminology and summarise the utility and pitfalls of adaptive designs. We discuss practical aspects around funding, ethical approval, treatment supply and communication with stakeholders and trial participants.
Our focus, however, is on the interpretation and reporting of results from adaptive design trials, which we consider vital for anyone involved in medical research. We emphasise the general principles of transparency and reproducibility and suggest how best to put them into practice.
Adaptive designs (ADs) allow pre-planned changes to an ongoing trial without compromising the validity of conclusions, and it is essential to distinguish pre-planned from unplanned changes that may also occur. The reporting of ADs in randomised trials is inconsistent and needs improving. Incompletely reported AD randomised trials are difficult to reproduce and are hard to interpret and synthesise. This consequently hampers their ability to inform practice as well as future research and contributes to research waste. Better transparency and adequate reporting will enable the potential benefits of ADs to be realised. This extension to the Consolidated Standards Of Reporting Trials (CONSORT) 2010 statement was developed to enhance the reporting of randomised AD clinical trials. We developed an Adaptive designs CONSORT Extension (ACE) guideline through a two-stage Delphi process with input from multidisciplinary key stakeholders in clinical trials research in the public and private sectors from 21 countries, followed by a consensus meeting. Members of the CONSORT Group were involved during the development process. The paper presents the ACE checklists for AD randomised trial reports and abstracts, as well as an explanation with examples to aid the application of the guideline. The ACE checklist comprises seven new items, nine modified items, six unchanged items for which additional explanatory text clarifies further considerations for ADs, and 20 unchanged items not requiring further explanatory text. The ACE abstract checklist has one new item, one modified item, one unchanged item with additional explanatory text for ADs, and 15 unchanged items not requiring further explanatory text. The intention is to enhance transparency and improve reporting of AD randomised trials to improve the interpretability of their results and the reproducibility of their methods, results and inference.
We also hope indirectly to facilitate the much-needed knowledge transfer of innovative trial designs to maximise their potential benefits.
BACKGROUND AND OBJECTIVES: The identification of life-threatening infection in febrile children presenting to the emergency department (ED) remains difficult. The quick Sequential Organ Failure Assessment (qSOFA) was only derived for adult populations, implying an urgent need for pediatric scores. We developed and validated a novel, adapted qSOFA score (Liverpool quick Sequential Organ Failure Assessment [LqSOFA]) and compared its performance with qSOFA, Pediatric Early Warning Score (PEWS), and National Institute for Health and Care Excellence (NICE) high-risk criteria in predicting critical care (CC) admission in febrile children presenting to the ED. METHODS: The LqSOFA (range, 0-4) incorporates age-adjusted heart rate, respiratory rate, capillary refill, and consciousness level on the Alert, Voice, Pain, Unresponsive scale. The primary outcome was CC admission within 48 hours of ED presentation, and the secondary outcome was sepsis-related mortality. LqSOFA, qSOFA, PEWS, and NICE high-risk criteria scores were calculated, and performance characteristics, including area under the receiver operating characteristic curve, were calculated for each score. RESULTS: In the initial (n = 1121) cohort, 47 CC admissions (4.2%) occurred, and in the validation (n = 12 241) cohort, 135 CC admissions (1.1%) occurred, and there were 5 sepsis-related deaths. In the validation cohort, LqSOFA predicted CC admission with an area under the receiver operating characteristic curve of 0.81 (95% confidence interval [CI], 0.76 to 0.86), versus qSOFA (0.66; 95% CI, 0.60 to 0.71), PEWS (0.93; 95% CI, 0.90 to 0.95), and NICE high-risk criteria (0.81; 95% CI, 0.78 to 0.85). For predicting CC admission, the LqSOFA outperformed the qSOFA, with a net reclassification index of 10.4% (95% CI, 1.0% to 19.9%). CONCLUSIONS: In this large study, we demonstrate improved performance of the LqSOFA over qSOFA in identifying febrile children at risk for CC admission and sepsis-related mortality.
Further validation is required in other settings. WHAT'S KNOWN ON THIS SUBJECT: The quick Sequential Organ Failure Assessment has been shown to more accurately predict mortality or ICU transfer than systemic inflammatory response syndrome or the quick Pediatric Logistic Organ Dysfunction-2 in an emergency department population, but with only moderate prognostic accuracy. WHAT THIS STUDY ADDS: In this retrospective study of >12 000 febrile children, the Liverpool quick Sequential Organ Failure Assessment outperforms the quick Sequential Organ Failure Assessment in predicting critical care admission. Liverpool quick Sequential Organ Failure Assessment is a rapid bedside tool that should undergo implementation testing.
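As described in the abstract above, the LqSOFA is a simple additive score (range 0-4): one point each for abnormal age-adjusted heart rate, abnormal age-adjusted respiratory rate, prolonged capillary refill, and reduced consciousness on the AVPU scale. A minimal sketch of such a scorer is below; the age bands and numeric cut-offs here are hypothetical placeholders for illustration only and are not the published LqSOFA thresholds, which should be taken from the original paper.

```python
# Illustrative sketch of an LqSOFA-style additive score (range 0-4).
# One point each for: abnormal age-adjusted heart rate, abnormal
# age-adjusted respiratory rate, prolonged capillary refill, and
# consciousness below "Alert" on the AVPU scale.
# NOTE: the age bands and cut-offs below are HYPOTHETICAL placeholders
# for illustration only; they are NOT the published LqSOFA thresholds.

# (age_upper_bound_years, heart_rate_upper, respiratory_rate_upper)
HYPOTHETICAL_LIMITS = [
    (1, 160, 55),
    (5, 140, 35),
    (12, 120, 28),
    (float("inf"), 110, 22),
]

def lqsofa_style_score(age_years, heart_rate, resp_rate, cap_refill_secs, avpu):
    """Return a 0-4 score; avpu is one of 'A', 'V', 'P', 'U'."""
    # Pick the first age band the child falls into.
    hr_limit, rr_limit = next(
        (hr, rr) for bound, hr, rr in HYPOTHETICAL_LIMITS if age_years < bound
    )
    score = 0
    score += heart_rate > hr_limit      # tachycardia for age
    score += resp_rate > rr_limit       # tachypnoea for age
    score += cap_refill_secs >= 3       # prolonged capillary refill
    score += avpu != 'A'                # reduced consciousness (V, P or U)
    return score
```

For example, under these placeholder cut-offs a 3-year-old with a heart rate of 150, a respiratory rate of 30, a capillary refill of 4 seconds, and AVPU "V" would score 3. The point of the sketch is only the structure of the tool: four binary bedside criteria summed into a 0-4 score.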
Background: Adequate reporting of adaptive designs (ADs) maximises their potential benefits in the conduct of clinical trials. Transparent reporting can help address some obstacles and concerns relating to the use of ADs. Currently, there are deficiencies in the reporting of AD trials. To overcome this, we have developed a consensus-driven extension to the CONSORT statement for randomised trials using an AD. This paper describes the processes and methods used to develop this extension rather than giving a detailed explanation of the guideline. Methods: We developed the guideline in seven overlapping stages: (1) building on prior research to inform the need for a guideline; (2) a scoping literature review to inform future stages; (3) drafting the first checklist version involving an External Expert Panel; (4) a two-round Delphi process involving international, multidisciplinary, and cross-sector key stakeholders; (5) a consensus meeting to advise, through voting, which reporting items to retain, and to discuss the structure of what to include in the supporting explanation and elaboration (E&E) document; (6) refining and finalising the checklist; and (7) writing up and disseminating the E&E document. The CONSORT Executive Group oversaw the entire development process. Results: Delphi survey response rates were 94/143 (66%), 114/156 (73%), and 79/143 (55%) in rounds 1, 2, and across both rounds, respectively. Twenty-seven delegates from Europe, the USA, and Asia attended the consensus meeting. The main checklist has seven new and nine modified items, plus six unchanged items with expanded E&E text to clarify further considerations for ADs. The abstract checklist has one new and one modified item, together with an unchanged item with expanded E&E text.
The E&E document will describe the scope of the guideline, the definition of an AD, and some types of ADs and trial adaptations, and will explain each reporting item in detail, including case studies. Conclusions: We hope that making transparent the development processes, methods, and all supporting information that aided decision-making will enhance the acceptability and quick uptake of the guideline. This will also help other groups when developing similar CONSORT extensions. The guideline is applicable to all randomised trials with an AD and contains minimum reporting requirements. Electronic supplementary material: The online version of this article (10.1186/s12916-018-1196-2) contains supplementary material, which is available to authorized users.
Adaptive designs for clinical trials permit alterations to a study in response to accumulating data in order to make trials more flexible, ethical, and efficient. These benefits are achieved while preserving the integrity and validity of the trial, through pre-specification of, and proper statistical adjustment for, the possible alterations during the course of the trial. Despite much research in the statistical literature highlighting the potential advantages of adaptive designs over traditional fixed designs, the uptake of such methods in clinical research has been slow. One major reason for this is that different adaptations to trial designs, as well as their advantages and limitations, remain unfamiliar to large parts of the clinical community. The aim of this paper is to clarify where adaptive designs can be used to address specific questions of scientific interest; we introduce the main features of adaptive designs and commonly used terminology, highlighting their utility and pitfalls, and illustrate their use through case studies of adaptive trials ranging from early-phase dose escalation to confirmatory phase III studies.
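To make the idea of a pre-specified adaptation concrete, consider the simplest case: a two-stage design with one interim look, where the trial stops early for efficacy only if the interim test statistic crosses a conservative boundary fixed in advance. The sketch below uses a pooled two-proportion z-test; the boundary values (3.0 interim, 1.98 final) are illustrative only, loosely in the spirit of O'Brien-Fleming-type designs, and are not drawn from any trial or paper discussed here.

```python
from math import sqrt

def pooled_z(events_a, n_a, events_b, n_b):
    """Two-proportion z-statistic (pooled variance), as might be
    computed at a pre-planned interim analysis of a two-arm trial."""
    p_a, p_b = events_a / n_a, events_b / n_b
    p = (events_a + events_b) / (n_a + n_b)           # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))      # pooled standard error
    return (p_a - p_b) / se

# HYPOTHETICAL pre-specified boundaries (illustrative only): a stringent
# interim boundary lets the final analysis use a level close to the
# conventional 1.96, in the spirit of O'Brien-Fleming-type designs.
INTERIM_BOUNDARY = 3.0
FINAL_BOUNDARY = 1.98

def interim_decision(z):
    """Apply the pre-specified early-stopping rule at the interim look."""
    return "stop early for efficacy" if z >= INTERIM_BOUNDARY else "continue"
```

For example, 30/50 responders versus 15/50 gives z ≈ 3.02, which crosses the hypothetical interim boundary, whereas 26/50 versus 20/50 gives z ≈ 1.20 and the trial continues to its final analysis. The key point, as the abstract stresses, is that both the rule and its boundaries are fixed before any data are seen, which is what preserves the trial's validity.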