Missing data in clinical trials can bias estimates of treatment effects. Statisticians and government agencies recommend making every effort to minimize missing data. Although statistical methods are available to accommodate missing data, their validity depends on often untestable assumptions about why the data are missing. The objective of this study was to assess the frequency with which randomized clinical trials published in 3 major pain journals (ie, European Journal of Pain, Journal of Pain, and Pain) reported strategies to prevent missing data, the number of participants who completed the study (ie, completers), and statistical methods to accommodate missing data. A total of 161 randomized clinical trials investigating treatments for pain, published between 2006 and 2012, were included. Approximately two-thirds of the trials reported at least 1 method that could potentially minimize missing data, the most common being allowance of concomitant medications. Only 61% of the articles explicitly reported the number of patients who were randomized and the number who completed the trial. Although only 14 articles reported that all randomized participants completed the study, fewer than 50% of the articles reported a statistical method to accommodate missing data. Last observation carried forward imputation was used most commonly (42%). Thirteen articles reported more than 1 method to accommodate missing data; however, the majority of the methods used, including last observation carried forward, are not among those currently recommended by statisticians. Authors, reviewers, and editors should prioritize proper reporting of missing data and appropriate use of methods to accommodate them in order to remedy the deficiencies identified in this systematic review.
Themes identified from the interviews reinforced the patterns observed in past research. GPs are becoming more confident and comfortable in working with drug misusers, and more positive towards methadone and methadone maintenance treatment, but they still feel that they lack the necessary knowledge and skills.
Successful procedural sedation represents a spectrum of patient- and clinician-related goals. The absence of a gold-standard measure of the efficacy of procedural sedation has led to a variety of outcomes being used in clinical trials; the consequent lack of consistency among measures makes comparisons among trials and meta-analyses challenging. We evaluated which existing measures have undergone psychometric analysis in a procedural sedation setting and whether the validity of any of these measures supports their use across the range of procedures for which sedation is indicated. Numerous measures were found to have been used in clinical research on procedural sedation across a wide range of procedures. However, reliability and validity have been evaluated for only a limited number of sedation scales, observer-rated pain/discomfort scales, and satisfaction measures, and in only a few categories of procedures. Typically, studies examined only 1 or 2 aspects of scale validity, and the results are likely unique to the specific clinical settings in which the measures were tested. Certain scales, for example, those that require a motor response to stimulation, are unsuitable for evaluating sedation during procedures in which movement is prohibited (eg, magnetic resonance imaging scans). Further work is required to evaluate existing measures for procedures for which they were not developed. Depending on the outcomes of these efforts, it might ultimately be necessary to consider measures of sedation efficacy to be procedure specific.