Human protection policies require favorable risk–benefit judgments prior to launch of clinical trials. For phase I and II trials, evidence for such judgment often stems from preclinical efficacy studies (PCESs). We undertook a systematic investigation of application materials (investigator brochures [IBs]) presented for ethics review for phase I and II trials to assess the content and properties of the PCESs contained in them. Using a sample of 109 IBs most recently approved at 3 institutional review boards based at German medical faculties between 2010 and 2016, we identified 708 unique PCESs. We then rated all identified PCESs for their reporting on study elements that help to address validity threats, whether they referenced published reports, and the direction of their results. Fewer than 5% of all PCESs described elements essential for reducing validity threats, such as randomization, sample size calculation, and blinded outcome assessment. For most PCESs (89%), no reference to a published report was provided. Only 6% of all PCESs reported an outcome demonstrating no effect. For the majority of IBs (82%), all PCESs were described as reporting positive findings. Our results show that most IBs for phase I/II studies did not allow evaluators to systematically appraise the strength of the supporting preclinical findings. The very rare reporting of PCESs that demonstrated no effect raises concerns about potential design or reporting biases. Poor PCES design and reporting thwart risk–benefit evaluation during ethical review of phase I/II studies.
The effect of stage of disease on health-related quality of life (HRQOL) is modest, although viral clearance is associated with higher HRQOL. HCV patients' HRQOL is strongly associated with concomitant illness and sociodemographic factors.
Poor study methodology leads to biased measurement of treatment effects in preclinical research. We used available sunitinib preclinical studies to evaluate relationships between study design and experimental tumor volume effect sizes. We identified published animal efficacy experiments where sunitinib monotherapy was tested for effects on tumor volume. Effect sizes were extracted alongside experimental design elements addressing threats to valid clinical inference. Reported use of practices to address internal validity threats was limited, with no experiments using blinded outcome assessment. Most malignancies were tested in one model only, raising concerns about external validity. We calculate a 45% overestimate of effect size across all malignancies due to potential publication bias. Pooled effect sizes for specific malignancies did not show apparent relationships with effect sizes in clinical trials, and we were unable to detect dose–response relationships. Design and reporting standards represent an opportunity for improving clinical inference. DOI: http://dx.doi.org/10.7554/eLife.08351.001
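The meta-analytic machinery behind such comparisons can be sketched briefly. The function and numbers below are hypothetical illustrations, not data from the review; they show how a standardized mean difference such as Hedges' g could be computed from each experiment's treated and control tumor volumes before pooling:

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference (Hedges' g) between treated and
    control tumor volumes, with small-sample bias correction."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                   / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sp          # Cohen's d
    j = 1 - 3 / (4 * (n_t + n_c) - 9)   # small-sample correction factor
    return d * j

# Hypothetical experiment: treated tumors smaller than controls,
# so g comes out negative (treatment reduced tumor volume)
g = hedges_g(mean_t=250, mean_c=400, sd_t=80, sd_c=90, n_t=8, n_c=8)
```

Per-experiment g values would then be pooled, for instance in a random-effects model, to estimate malignancy-specific effects of the kind the abstract compares against clinical trial outcomes.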
IMPORTANCE After a drug receives regulatory approval, researchers often pursue small, underpowered trials, called exploratory trials, aimed at testing additional indications. If favorable early findings from exploratory trials are not promptly followed by confirmatory trials, then physicians, patients, and payers can be left uncertain about a drug's clinical value (clinical agnosticism). Such findings may encourage the off-label use of ineffective drugs. OBJECTIVE To characterize the relationship between exploratory and confirmatory postapproval trials for the blockbuster drug pregabalin (Lyrica). EVIDENCE REVIEW Ovid MEDLINE and Embase databases were used to identify clinical trials published prior to January 2018 that tested the efficacy of pregabalin for nonapproved indications. Indications, trial outcomes, publication dates, and trial design elements were recorded. Time elapsed was calculated between the generation of clinical agnosticism about pregabalin (ie, publications reporting positive or inconclusive evidence of efficacy on a primary endpoint) and its resolution (publication of at least 1 confirmatory trial in the same indication, regardless of outcome). FINDINGS There were 238 trials identified that tested the efficacy of pregabalin in at least 33 indications; 5 indications eventually received European Medicines Agency and/or US Food and Drug Administration marketing approval. Sixty-seven percent (22 of 33) of first publications for new indications may have generated clinical agnosticism. Of indications with at least 5 years of follow-up, 63% (17 of 27) may have generated agnosticism that was not addressed in confirmatory trials within 5 years.
As pregabalin development expanded from indications that received regulatory approval to other indications, the linkage between exploratory and confirmatory trial publication diminished. CONCLUSIONS AND RELEVANCE After initial approval, exploratory evidence suggesting the value of pregabalin for new indications often went unconfirmed for extended periods. Poor coordination between exploratory and confirmatory testing may be an important vehicle through which off-label prescription comes to be recommended in clinical practice guidelines and encouraged in the absence of confirmatory trial evidence.
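The interval between the generation of agnosticism and its resolution follows directly from publication dates. A minimal sketch, using hypothetical dates rather than data from the review:

```python
from datetime import date

def years_to_confirmation(exploratory_pub: date, confirmatory_pub: date) -> float:
    """Elapsed time, in years, between the exploratory publication that
    generated clinical agnosticism and the first confirmatory trial
    publication in the same indication."""
    return (confirmatory_pub - exploratory_pub).days / 365.25

# Hypothetical indication: exploratory result in 2006, confirmation in 2013
gap = years_to_confirmation(date(2006, 3, 1), date(2013, 9, 1))
unresolved_at_5y = gap > 5  # would count as agnosticism not addressed within 5 years
```

Indications whose computed gap exceeds the 5-year threshold are the ones the abstract counts as unaddressed agnosticism.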
Background Hepatitis C virus (HCV) infection is associated with substantial costs to patients, their caregivers and society. Aims We evaluated time costs (time spent seeking healthcare) and out-of-pocket (OOP) costs for patients with HCV and their caregivers. Methods We measured costs for 738 HCV outpatients in a tertiary-care clinic using a patient-completed questionnaire. Time and OOP costs were compared across disease stages and sociodemographic categories. We examined the association between cost and disease stage using linear regression adjusting for age, gender, marital status, education, income and Index of Coexistent Disease (ICED) comorbidity score. Costs were expressed in 2007 Canadian dollars. Results The mean annual time cost per patient was $2136 (98 h), and ranged from $281 (18 h) in individuals who had cleared the virus to $9416 (420 h) in transplant recipients. Caregiver costs were reported in 10% of patients. The mean annual OOP cost per patient was $1326. Patients receiving active treatment and those with late-stage disease spent $2500–2800 per year on HCV-related healthcare, approximately 7% of their annual income. Patients who had cleared the virus had the lowest time and OOP costs. Low income and unemployed patients had higher costs. Conclusions In HCV-infected individuals, OOP and time costs represent a significant economic burden and fall disproportionately upon those least able to afford them. The lower cost burden among those who were successfully treated suggests that wider use of antiviral therapy may reduce economic burden in addition to improving health outcomes.
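Time costs of this kind are typically valued by multiplying hours of care-seeking by a wage rate (the human-capital approach). The abstract does not state its exact valuation method, so the sketch below is an assumption, with a hypothetical hourly wage chosen to roughly reproduce the reported mean time cost:

```python
def annual_patient_cost(hours_per_year: float, hourly_wage: float,
                        oop_cost: float = 0.0) -> dict:
    """Value time spent seeking HCV care at a wage rate (human-capital
    approach, assumed here) and combine it with out-of-pocket costs."""
    time_cost = hours_per_year * hourly_wage
    return {"time": time_cost, "oop": oop_cost, "total": time_cost + oop_cost}

# Hypothetical patient: 98 h of care-seeking valued at $21.80/h (2007 CAD),
# approximating the reported mean time cost of ~$2136, plus mean OOP costs
costs = annual_patient_cost(hours_per_year=98, hourly_wage=21.80, oop_cost=1326)
```

Summing the two components per patient is what allows burdens to be compared across disease stages and income groups, as the abstract does.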
Ethics is of growing interest to neuroscientists, but rather than reflecting the professional community's fundamental commitment to protecting human subjects, caring for animals, and fostering public understanding, that interest has been consumed by administrative overhead and the mission creep of institutional ethics reviews. Faculty, trainees, and staff (n = 605) in North America whose work involves brain imaging and brain stimulation completed an online survey about ethics in their research. Using factor analysis and linear regression, we found significant effects for invasiveness of imaging technique, professional position, gender, and local presence of bioethics centers. We propose strategies for improving communication between the neuroscience community and ethics review boards, collaborations between neuroscientists and biomedical ethicists, and ethics training in graduate neuroscience programs to revitalize mutual goals and interests.
Despite large efforts to test analgesics in animal models, only a handful of new pain drugs have shown efficacy in patients. Here, we report a systematic review and meta-analysis of preclinical studies of the commercially successful drug pregabalin. Our primary objective was to describe design characteristics and outcomes of studies testing the efficacy of pregabalin in behavioral models of pain. Secondarily, we examined the relationship between design characteristics and effect sizes. We queried MEDLINE, Embase, and BIOSIS to identify all animal studies testing the efficacy of pregabalin published before January 2018 and recorded experimental design elements addressing threats to validity and all necessary data for calculating effect sizes, expressed as the percentage of maximum possible effect. We identified 204 studies (531 experiments) assessing the efficacy of pregabalin in behavioral models of pain. The analgesic effect of pregabalin was consistently robust across every etiology/measure tested, even for pain conditions that have not responded to pregabalin in patients. Experiments did not generally report using design elements aimed at reducing threats to validity, and analgesic activity was typically tested in a small number of model systems. However, we were unable to show any clear relationships between preclinical design characteristics and effect sizes. Our findings suggest opportunities for improving the design and reporting of preclinical studies in pain. They also suggest that factors other than those explored in this study may be more important for explaining the discordance between outcomes in animal models of pain and those in clinical trials.
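The percentage of maximum possible effect is a standard normalization in behavioral antinociception assays: the post-drug response is scaled between the animal's baseline response and the assay cutoff, so 100% means the cutoff was reached. A minimal sketch with hypothetical tail-flick latencies (the values are illustrative, not data from the review):

```python
def percent_mpe(baseline: float, post_drug: float, cutoff: float) -> float:
    """Percentage of maximum possible effect (%MPE): the post-drug
    response scaled between baseline and the assay cutoff, with 100%
    meaning the response reached the cutoff."""
    return 100.0 * (post_drug - baseline) / (cutoff - baseline)

# Hypothetical tail-flick latencies (seconds): baseline 3 s, cutoff 10 s
mpe = percent_mpe(baseline=3.0, post_drug=8.0, cutoff=10.0)
```

Expressing every experiment's outcome as %MPE puts heterogeneous behavioral measures on a common 0–100 scale, which is what makes effect sizes comparable across the 531 experiments.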