In response to overwhelming evidence and the consequences of poor-quality reporting of randomized, controlled trials (RCTs), many medical journals and editorial groups have now endorsed the CONSORT (Consolidated Standards of Reporting Trials) statement, a 22-item checklist and flow diagram. Because CONSORT was primarily aimed at improving the quality of reporting of efficacy, only 1 checklist item specifically addressed the reporting of safety. Considerable evidence suggests that reporting of harms-related data from RCTs also needs improvement. Members of the CONSORT Group, including journal editors and scientists, met in Montebello, Quebec, Canada, in May 2003 to address this problem. The result is the following document: the standard CONSORT checklist with 10 new recommendations about reporting harms-related issues, accompanying explanation, and examples to highlight specific aspects of proper reporting. We hope that this document, in conjunction with other CONSORT-related materials (http://www.consort-statement.org), will help authors improve their reporting of harms-related data from RCTs. Better reporting will help readers critically appraise and interpret trial results. Journals can support this goal by revising their Instructions to Authors so that they refer authors to this document.
This article concerns the development and use of patient-reported outcomes (PROs) in clinical trials to evaluate medical products. A PRO is any report coming directly from patients, without interpretation by physicians or others, about how they function or feel in relation to a health condition and its therapy. PRO instruments are used to measure these patient reports. PROs provide a unique perspective on medical therapy, because some effects of a health condition and its therapy are known only to patients. Properly developed and evaluated PRO instruments also have the potential to provide more sensitive and specific measurements of the effects of medical therapies, thereby increasing the efficiency of clinical trials that attempt to measure the meaningful treatment benefits of those therapies. Poorly developed and evaluated instruments may provide misleading conclusions or data that cannot be used to support product labeling claims. We review selected major challenges, from the Food and Drug Administration's perspective, in using PRO instruments, measures, and end points to support treatment benefit claims in product labeling. These challenges highlight the need for sponsors to formulate desired labeling claim(s) prospectively, to acquire and document information needed to support these claim(s), and to identify existing instruments or develop new and more appropriate PRO instruments for evaluating treatment benefit in the defined population in which they will seek claims.
Since 1998, the US Food and Drug Administration (FDA) has been exploring new automated and rapid Bayesian data mining techniques. These techniques have been used to systematically screen the FDA's huge MedWatch database of voluntary reports of adverse drug events for possible events of concern. The data mining method currently being used is the Multi-Item Gamma Poisson Shrinker (MGPS) program that replaced the Gamma Poisson Shrinker (GPS) program we originally used with the legacy database. The MGPS algorithm, the technical aspects of which are summarised in this paper, computes signal scores for pairs, and for higher-order (e.g. triplet, quadruplet) combinations of drugs and events that are significantly more frequent than their pair-wise associations would predict. MGPS generates consistent, redundant, and replicable signals while minimising random patterns. Signals are generated without using external exposure data, adverse event background information, or medical information on adverse drug reactions. The MGPS interface streamlines multiple input-output processes that previously had been manually integrated. The system, however, cannot distinguish between already-known associations and new associations, so the reviewers must filter these events. In addition to detecting possible serious single-drug adverse event problems, MGPS is currently being evaluated to detect possible synergistic interactions between drugs (drug interactions) and adverse events (syndromes), and to detect differences among subgroups defined by gender and by age, such as paediatrics and geriatrics. In the current data, only 3.4% of all 1.2 million drug-event pairs ever reported (with frequencies ≥ 1) generate signals [lower 95% confidence interval limit of the adjusted ratios of the observed counts over expected (O/E) counts (denoted EB05) of ≥ 2].
The total frequency count that contributed to signals comprised 23% (2.4 million) of the 10.4 million drug-event pairs reported, greatly facilitating more focused follow-up and evaluation. The algorithm provides an objective, systematic view of the data, alerting reviewers to critically important new safety signals. The study of signals detected by current methods, signals stored in the Center for Drug Evaluation and Research's Monitoring Adverse Reports Tracking System, and the signals regarding cerivastatin, a cholesterol-lowering drug voluntarily withdrawn from the market in August 2001, exemplifies the potential of data mining to improve early signal detection. The operating characteristics of data mining in detecting early safety signals, exemplified by studying a drug recently well characterised by large clinical trials, confirm our experience that the signals generated by data mining have high enough specificity to deserve further investigation. The application of these tools may ultimately improve usage recommendations.
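The O/E disproportionality idea behind GPS/MGPS can be illustrated with a toy screen over hypothetical spontaneous reports. This is a deliberately simplified sketch, not the FDA's implementation: real MGPS additionally applies empirical-Bayes shrinkage to the O/E ratios and reports the lower 95% limit of the shrunk ratio (the EB05 threshold described above). All drug and event names below are invented.

```python
# Toy disproportionality screen in the spirit of the O/E ratios used by GPS/MGPS.
# Simplified illustration only: no empirical-Bayes shrinkage, no EB05 interval.
from collections import Counter

# Hypothetical spontaneous reports: (drug, adverse event) pairs.
reports = [
    ("drugA", "nausea"), ("drugA", "nausea"), ("drugA", "rash"),
    ("drugB", "nausea"), ("drugB", "headache"), ("drugB", "headache"),
    ("drugC", "rash"), ("drugA", "nausea"), ("drugC", "headache"),
    ("drugB", "nausea"),
]

n = len(reports)
pair_counts = Counter(reports)
drug_counts = Counter(d for d, _ in reports)
event_counts = Counter(e for _, e in reports)

def observed_over_expected(drug, event):
    """O/E: observed pair count over the count expected under independence."""
    observed = pair_counts[(drug, event)]
    expected = drug_counts[drug] * event_counts[event] / n
    return observed / expected

# drugA appears in 4 of 10 reports and nausea in 5, so the expected pair
# count under independence is 4 * 5 / 10 = 2.0; the observed count is 3,
# giving O/E = 1.5.
print(round(observed_over_expected("drugA", "nausea"), 2))
```

In the real system the raw O/E ratio would be unstable for rare pairs, which is precisely what the "Shrinker" part of MGPS addresses before a ≥ 2 threshold on EB05 is applied.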
In recent years, the use of the last observation carried forward (LOCF) approach to imputing missing data in clinical trials has been greatly criticized, and several likelihood-based modeling approaches have been proposed to analyze such incomplete data. One of these is the Mixed-Effects Model Repeated Measures (MMRM) model. To compare the performance of the LOCF and MMRM approaches in analyzing incomplete data, two extensive simulation studies are conducted, and the empirical bias and Type I error rates associated with estimators and tests of treatment effects under three missing-data paradigms are evaluated. The simulation studies demonstrate that LOCF analysis can lead to substantial biases in estimators of treatment effects and can greatly inflate Type I error rates of the statistical tests, whereas MMRM analysis of the available data leads to estimators with comparatively small bias and controls Type I error rates at the nominal level in the presence of missing completely at random (MCAR) or missing at random (MAR) data, and under some forms of missing not at random (MNAR) data. In a sensitivity analysis of 48 clinical trial datasets obtained from 25 New Drug Application (NDA) submissions of neurological and psychiatric drug products, MMRM analysis appears to be a superior approach in controlling Type I error rates and minimizing biases, as compared to LOCF ANCOVA analysis. In the exploratory analyses of the datasets, no clear evidence of the presence of MNAR missingness is found.
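The mechanism by which LOCF biases a treatment-effect estimate can be shown with a minimal simulation. This sketch is hypothetical and much simpler than the paper's simulation studies (it has one arm, one post-baseline visit, and dropout unrelated to outcome): patients truly improve by 2 points, but dropouts have their baseline value carried forward, freezing them at the earlier, worse score and diluting the apparent improvement.

```python
# Minimal sketch of why LOCF can bias estimates of change from baseline.
# Hypothetical setup, not the paper's design: true improvement is 2.0 points.
import random

random.seed(0)

def simulate_mean_endpoint(dropout_rate, n=10000):
    """Mean final-visit score when dropouts are imputed by LOCF."""
    total = 0.0
    for _ in range(n):
        baseline = random.gauss(0.0, 1.0)
        final = baseline + 2.0          # every patient truly improves by 2
        if random.random() < dropout_rate:
            total += baseline           # LOCF: carry the last (baseline) value
        else:
            total += final
    return total / n

full = simulate_mean_endpoint(0.0)   # no dropout: mean ≈ 2.0 (the truth)
locf = simulate_mean_endpoint(0.4)   # 40% dropout: mean ≈ 1.2, badly biased
print(round(full, 1), round(locf, 1))
```

An MMRM analysis would instead model all available visits under a likelihood valid under MAR, which is why it avoids this particular bias; implementing that requires a mixed-model library and is beyond this sketch.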
With the advances in human genomic/genetic studies, the clinical trial community has gradually recognized that phenotypically homogeneous patients may be heterogeneous at the genomic level. Genomic technology offers a possible avenue for developing a genomic (composite) biomarker to predict a genomically responsive patient subset that may have a (much) higher likelihood of benefiting from a treatment. The randomized controlled trial is the mainstay for providing scientifically convincing evidence of the purported effect a new treatment may demonstrate. In conventional clinical trials, the primary clinical hypothesis pertains to the therapeutic effect, defined by the primary efficacy endpoint, in all patients who are eligible for the study. The one-size-fits-all aspect of the conventional design has been challenged, particularly when diseases may be heterogeneous due to observable clinical characteristics and/or unobservable underlying genomic characteristics. Extending the conventional single-population design objective to an objective that encompasses two possible patient populations allows more informative evaluation of patients having different degrees of responsiveness to medication. Building an additional genomic objective into conventional clinical trials can generate an appealing conceptual framework, from the patient's perspective, for addressing personalized medicine in well-controlled clinical trials. There are many perceived benefits of personalized medicine based on the notion of being genomically proactive in the identification of disease and the prevention of disease or recurrence. In this paper, we show that an adaptive design approach can be constructed to study a clinical hypothesis of overall treatment effect and a hypothesis of treatment effect in a genomic subset more efficiently than the conventional non-adaptive approach.
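Testing both an overall-population hypothesis and a genomic-subset hypothesis raises a multiplicity problem, since two chances to win inflate the Type I error. A hedged sketch of the simplest remedy, a fixed Bonferroni-style alpha split, is below; the weights and p-values are invented, and the paper's adaptive design is considerably more elaborate (e.g., it may re-weight or restrict the population at an interim analysis).

```python
# Hedged sketch: co-primary hypotheses (overall effect, genomic-subset effect)
# tested with a simple Bonferroni alpha split. Illustrates the multiplicity
# control only; this is not the adaptive procedure of the paper.
ALPHA = 0.05

def evaluate(p_overall, p_subset, w_overall=0.8):
    """Declare success for each hypothesis at its share of the total alpha."""
    return {
        "overall": p_overall <= ALPHA * w_overall,          # tested at 0.04
        "genomic_subset": p_subset <= ALPHA * (1 - w_overall),  # tested at 0.01
    }

# Hypothetical trial result: both hypotheses clear their thresholds.
print(evaluate(0.03, 0.004))
```

The split guarantees that the chance of any false-positive claim stays at or below ALPHA; an adaptive design tries to spend this budget more efficiently by using interim data to decide where the effect is concentrated.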
The P-value is a random variable derived from the distribution of the test statistic used to analyze a data set and to test a null hypothesis. Under the null hypothesis, the P-value based on a continuous test statistic has a uniform distribution over the interval [0, 1], regardless of the sample size of the experiment. In contrast, the distribution of the P-value under the alternative hypothesis is a function of both sample size and the true value or range of true values of the tested parameter. The characteristics, such as mean and percentiles, of the P-value distribution can give valuable insight into how the P-value behaves for a variety of parameter values and sample sizes. Potential applications of the P-value distribution under the alternative hypothesis to the design, analysis, and interpretation of results of clinical trials are considered. (International Biometric Society)

Introduction. The P-value is one of the most routinely used statistical measures of uncertainty, yet statisticians may in some situations (Goodman, 1992) disagree on its appropriate use and on its interpretation as a measure of evidence. The P-value is derived from the perspective of a test of hypothesis in which a test statistic is calculated from results of a given set of data and, under the assumption that the null hypothesis is true, the distribution of the test statistic is used to obtain the tail probability of observing that result or a more extreme result. Thus, the P-value is a measure of evidence against the null hypothesis.
Because the P-value is based upon analysis of random variables, it itself is a random variable whose distribution, for continuous test statistics, is well known to be uniform over the interval [0, 1] under the null hypothesis. It is because of this fact that a cutoff for a P-value at, say 0.05, is used to control the chances that, for any given experiment, one of twenty P-values could be 0.05 or less, even when the null hypothesis is true. This concept, in the Neyman-Pearson theory of hypothesis testing, is known as the Type I error rate, which is a preexperiment error rate that determines the rejection region and is intended to control the overall frequency of making erroneous rejections of the null hypothesis. It is of interest that the distribution of the P-value, when the null hypothesis is true, is uniform over [0,1] regardless of the sample size of an experiment, so there is no way to distinguish P-values derived from large studies from those derived from small samples, nor from studies well powered to detect a posited alternative hypothesis from those underpowered to detect that same posited alternative value. Other statistical measures, such as confidence intervals, ...
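The uniformity property described above is easy to verify by simulation. The sketch below uses a hypothetical one-sample z-test of H0: mu = 0 with known sigma = 1 (chosen only because its p-value has a closed form via the normal CDF); whatever the sample size, about 5% of null p-values fall below 0.05, whereas under an alternative the distribution shifts sharply toward zero.

```python
# Sketch: under H0, p-values from a continuous test statistic are uniform on
# [0, 1] regardless of sample size. Hypothetical one-sample z-test, sigma = 1.
import math
import random

random.seed(1)

def z_test_pvalue(sample, mu0=0.0, sigma=1.0):
    """Two-sided p-value of a one-sample z-test with known sigma."""
    n = len(sample)
    z = (sum(sample) / n - mu0) * math.sqrt(n) / sigma
    return math.erfc(abs(z) / math.sqrt(2))  # equals 2 * P(Z > |z|)

def frac_below_005(n_per_trial, n_trials=5000, mu=0.0):
    """Fraction of simulated p-values below 0.05 when data are N(mu, 1)."""
    pvals = [z_test_pvalue([random.gauss(mu, 1.0) for _ in range(n_per_trial)])
             for _ in range(n_trials)]
    return sum(p < 0.05 for p in pvals) / n_trials

print(frac_below_005(5))             # small samples, H0 true: ≈ 0.05
print(frac_below_005(100))           # large samples, H0 true: still ≈ 0.05
print(frac_below_005(100, mu=0.3))   # H1 true: far above 0.05 (high power)
```

The last line is the sample-size dependence the text contrasts with the null case: under the alternative, the p-value distribution, and hence the power, depends on both n and the true parameter value.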
At the request of the Food and Drug Administration (FDA) and with its funding, the Panel on the Handling of Missing Data in Clinical Trials was created by the National Research Council's Committee on National Statistics. This panel recently published a report (1) with recommendations that will be of use not only to the FDA but also to the entire clinical trial community, so that the latter can take measures to improve the conduct and analysis of clinical trials.