Although missing outcome data are an important problem in randomized trials and observational studies, methods to address this issue can be difficult to apply. Using simulated data, the authors compared 3 methods to handle missing outcome data: 1) complete case analysis; 2) single imputation; and 3) multiple imputation (all 3 with and without covariate adjustment). Simulated scenarios focused on continuous or dichotomous missing outcome data from randomized trials or observational studies. When outcomes were missing at random, single and multiple imputations yielded unbiased estimates after covariate adjustment. Estimates obtained by complete case analysis with covariate adjustment were unbiased as well, with coverage close to 95%. When outcome data were missing not at random, all methods gave biased estimates, but handling missing outcome data by means of 1 of the 3 methods reduced bias compared with a complete case analysis without covariate adjustment. Complete case analysis with covariate adjustment and multiple imputation yield similar estimates in the event of missing outcome data, as long as the same predictors of missingness are included. Hence, complete case analysis with covariate adjustment can and should be used as the analysis of choice more often. Multiple imputation, in addition, can accommodate the missing-not-at-random scenario more flexibly, making it especially suited for sensitivity analyses.
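The missing-at-random finding above can be illustrated with a minimal simulation sketch. This is not the authors' simulation code: the data-generating model, effect size, and missingness mechanism below are illustrative assumptions. It shows a complete case analysis of a continuous outcome that is biased without covariate adjustment but approximately unbiased with it, when missingness depends on an observed covariate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical simulated trial: randomized treatment X with true
# effect 1.0, baseline covariate Z, continuous outcome Y.
x = rng.integers(0, 2, n)
z = rng.normal(size=n)
y = 1.0 * x + 2.0 * z + rng.normal(size=n)

# Missing at random: among treated participants, the probability that
# the outcome is missing depends on the observed covariate Z.
p_miss = 1.0 / (1.0 + np.exp(-2.0 * x * z))
observed = rng.random(n) >= p_miss

def ols_treatment_coef(design, outcome):
    """Least-squares coefficient on the treatment column (column 1)."""
    coef, *_ = np.linalg.lstsq(design, outcome, rcond=None)
    return coef[1]

ones = np.ones(observed.sum())
# Complete case analysis WITHOUT covariate adjustment: biased here,
# because missingness is linked to Z, which also predicts the outcome.
b_unadjusted = ols_treatment_coef(
    np.column_stack([ones, x[observed]]), y[observed])
# Complete case analysis WITH covariate adjustment: approximately
# unbiased under MAR, matching the abstract's finding.
b_adjusted = ols_treatment_coef(
    np.column_stack([ones, x[observed], z[observed]]), y[observed])
```

With this mechanism the adjusted estimate recovers the true effect of 1.0, while the unadjusted complete case estimate is pulled far below it; multiple imputation with the same predictors of missingness would behave like the adjusted analysis.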
Objective
To develop and prospectively evaluate a method of epileptic seizure detection combining heart rate and movement.

Methods
In this multicenter, in-home, prospective, video-controlled cohort study, nocturnal seizures were detected by heart rate (photoplethysmography) or movement (3-D accelerometry) in persons with epilepsy and intellectual disability. Participants with >1 major seizure per month wore a bracelet (Nightwatch) on the upper arm at night for 2 to 3 months. Major seizures were tonic-clonic, generalized tonic >30 seconds, hyperkinetic, or others, including clusters (>30 minutes) of short myoclonic/tonic seizures. Video of all events (alarms, nurse diaries) and of 10% of completely screened nights was reviewed to classify each event as a major seizure (needing an alarm), a minor seizure (needing no alarm), or no seizure. Reliability was tested by interobserver agreement. We determined device performance, compared it with a bed sensor (Emfit), and evaluated the caregivers' user experience.

Results
Twenty-eight of 34 admitted participants (1,826 nights, 809 major seizures) completed the study. Interobserver agreement (major/no major seizure) was 0.77 (95% confidence interval [CI] 0.65–0.89). Median sensitivity per participant was 86% (95% CI 77%–93%), the false-negative alarm rate was 0.03 per night (95% CI 0.01–0.05), and the positive predictive value was 49% (95% CI 33%–64%). The multimodal sensor showed better sensitivity than the bed sensor (n = 14, median difference 58%, 95% CI 39%–80%, p < 0.001). The caregivers' questionnaire (n = 33) indicated good sensor acceptance and usability according to 28 and 27 respondents, respectively.

Conclusion
Combining heart rate and movement resulted in reliable detection of a broad range of nocturnal seizures.
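The performance measures reported above (sensitivity, positive predictive value, false-negative rate per night) can be sketched as simple count ratios. The counts below are hypothetical, not the study data; they only show how the three metrics relate to alarmed, missed, and false-alarm events.

```python
# Hypothetical counts for a night-monitoring device (not the study's
# raw data): tp = alarmed major seizures, fp = false alarms,
# fn = missed major seizures.
def detection_metrics(tp, fp, fn, nights):
    sensitivity = tp / (tp + fn)   # share of major seizures that triggered an alarm
    ppv = tp / (tp + fp)           # share of alarms that were true major seizures
    fn_per_night = fn / nights     # missed major seizures per monitored night
    return sensitivity, ppv, fn_per_night

sens, ppv, fnr = detection_metrics(tp=86, fp=90, fn=14, nights=500)
```

Note that a high sensitivity can coexist with a modest positive predictive value, as in the study, when false alarms are frequent relative to true events.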
Master protocols have attracted growing interest in recent years. By assigning patients to specific substudies, they aim to target and accelerate clinical development. Given their complexity, basket, umbrella, and platform designs have raised challenging regulatory and statistical questions, especially regarding the control of multiplicity in confirmatory trials. In basket trials, regulatory assessment of the benefit/risk balance in pooled populations and the choice of treatment indication are challenging. We provide our perspectives on these topics here. In master protocols, as long as the statistical hypotheses tested in the different substudies are independent, no additional adjustment for multiplicity across substudies should be required. Moreover, sharing a control arm within an umbrella or platform trial investigating different drugs would not require a correction of the type I error rate, although the chance of multiple false-positive regulatory decisions should be recognized. In basket trials, pooling across substudies requires a rationale supporting the intended indication and should be preplanned. Assessment of the benefit/risk balance in pooled target populations can be complicated by differences in design or in efficacy/safety signals between substudies. While trials governed by a master protocol can offer logistical and financial advantages, more experience is needed to gain deeper insight into this novel framework.
Despite the use of preventive selective arterial embolization, patients with tuberous sclerosis complex (TSC) exhibit clinically significant kidney disease and excess mortality, largely because of kidney-related complications.
Background
A non-inferiority (NI) trial is intended to show that the effect of a new treatment is not worse than that of the comparator. We conducted a review to identify how NI trials were conducted and reported, and whether the standard requirements of the guidelines were followed.

Methodology and Principal Findings
From 300 randomly selected articles on NI trials indexed in PubMed as of 5 February 2009, we included 227 articles that referred to 232 trials. We excluded studies on bioequivalence, trials in healthy volunteers, non-drug trials, and articles for which the full text could not be retrieved. A large proportion of trials (34.0%) did not use blinding. The NI margin was reported in 97.8% of the trials, but only 45.7% of the trials reported the method used to determine the margin. Most trials used either intention-to-treat (ITT) analysis (34.9%) or per-protocol (PP) analysis (19.4%), while 41.8% used both methods. Fewer than 10% of the trials included a placebo arm to confirm the efficacy of the new drug and the active comparator against placebo, and fewer than 5.0% reported the similarity of the current trial to the comparator's previous trials. In general, no difference in reporting quality was seen before and after the release of the 2006 extension of the CONSORT statement, or between high-impact and low-impact journals.

Conclusion
The conduct and reporting of NI trials can be improved, particularly in terms of maximizing the use of blinding, using both ITT and PP analyses, reporting similarity with the comparator's previous trials to support a valid constancy assumption, and, most importantly, reporting the method used to determine the NI margin.
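The role of the NI margin discussed above can be sketched with the usual confidence-interval criterion: non-inferiority is declared when the lower bound of the two-sided 95% CI for the difference in success proportions (new minus comparator) lies above the negative of the prespecified margin. The numbers and the Wald interval below are illustrative assumptions, not a recommended analysis; real trials would prespecify the margin with a justified method and typically report both ITT and PP results.

```python
import math

def noninferiority_wald(p_new, p_ref, n_new, n_ref, margin):
    """Sketch of a non-inferiority check on a difference in success
    proportions (new - reference), using a two-sided 95% Wald CI.
    Returns (non_inferior, lower_bound). Illustrative only."""
    diff = p_new - p_ref
    se = math.sqrt(p_new * (1 - p_new) / n_new +
                   p_ref * (1 - p_ref) / n_ref)
    lower = diff - 1.96 * se  # lower bound of the two-sided 95% CI
    return lower > -margin, lower

# Hypothetical trial: 82% vs 85% success, 500 per arm, margin 10 points.
ok, lower = noninferiority_wald(0.82, 0.85, 500, 500, margin=0.10)
```

With these inputs the lower bound is about -0.076, which stays above -0.10, so non-inferiority would be declared even though the point estimate favors the comparator; this is exactly why the choice and justification of the margin matter.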