Objective: We assessed the adequacy of randomized controlled trial (RCT) registration, changes to registration data, and reporting completeness for articles in ICMJE journals during the 2.5 years after the registration requirement policy took effect.

Methods: For a set of 149 reports of 152 RCTs with a ClinicalTrials.gov registration number, published from September 2005 to April 2008, we evaluated the completeness of 9 items from the WHO 20-item Minimum Data Set relevant for assessing trial quality. We also assessed changes to the registration elements at the Archive site of ClinicalTrials.gov and compared published and registry data.

Results: RCTs were mostly registered before the 13 September 2005 deadline (n = 101, 66.4%); 118 (77.6%) started recruitment before and 31 (20.4%) after registration. At the time of registration, the 152 RCTs had a total of 224 missing registry fields, most commonly ‘Key secondary outcomes’ (44.1% of RCTs) and ‘Primary outcome’ (38.8%). RCTs that began recruitment after registration were more likely to have missing Minimum Data Set items than those that began before: 24/31 (77.4%) vs. 57/118 (48.3%) (χ², 1 df = 7.255, P = 0.007). Major changes in the data entries were found for 31 (25.2%) RCTs. The number of RCTs with differences between registered and published data ranged from 21 (13.8%) for Study type to 118 (77.6%) for Target sample size.

Conclusions: ICMJE journals published properly registered RCTs, but the registration data were often inadequate, underwent substantial changes in the registry over time, and differed from the published data. Editors need to establish quality control procedures in their journals so that journals continue to contribute to the increased transparency of clinical trials.
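The reported test statistic (χ² = 7.255 on 1 df) can be reproduced from the counts in the abstract, assuming the standard Yates continuity correction for a 2×2 table; the sketch below is illustrative, and the function name is ours, not from the paper.

```python
def yates_chi_square(table):
    """Chi-square statistic with Yates continuity correction for a 2x2 table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (abs(observed - expected) - 0.5) ** 2 / expected
    return chi2

# Rows: recruitment before registration (57 of 118 with missing items),
#       recruitment after registration (24 of 31 with missing items).
table = [[57, 118 - 57], [24, 31 - 24]]
print(round(yates_chi_square(table), 3))  # 7.255, matching the abstract
```

Without the continuity correction the statistic comes out around 8.4, so the correction assumption is what reconciles the arithmetic with the reported value.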
The systematic use of evidence to inform healthcare decisions, particularly through health technology assessment (HTA), has gained increased recognition. HTA has become a standard policy tool for informing decision makers who must manage the entry and use of pharmaceuticals, medical devices, and other technologies (including complex interventions) within health systems, for example, through reimbursement and pricing. Despite increasing attention to HTA activities, there has been no recent attempt to comprehensively synthesize good practices or emerging good practices that support population-based decision-making. After the identification of some good practices through the release of the ISPOR Guidelines Index in 2013, the ISPOR HTA Council identified a need to review existing guidance more thoroughly. The purpose of this effort was to create a basis for capacity building, education, and improved consistency in approaches to HTA-informed decision-making. Our findings suggest that although many good practices have been developed for assessment and some other key aspects of defining HTA processes, there are also many areas where good practices are lacking, including the organizational aspects of HTA, the use of deliberative processes, and measuring the impact of HTA. The extent to which these good practices are used and applied by HTA bodies is beyond the scope of this report but may be of interest to future researchers.
The opportunity cost of inappropriate health policy decisions is greater in Central and Eastern European (CEE) than in Western European (WE) countries because of poorer population health and more limited healthcare resources. Applying health technology assessment (HTA) before healthcare financing decisions can improve the allocative efficiency of scarce resources. However, few CEE countries have a clear roadmap for HTA implementation. Examples from high-income countries may not be directly relevant, as CEE countries cannot devote as many financial and human resources to substantiating policy decisions with evidence. Our objective was to describe the main HTA implementation scenarios in CEE countries and summarize the most important questions related to capacity building, financing HTA research, process and organizational structure for HTA, standardization of HTA methodology, use of local data, scope of mandatory HTA, decision criteria, and international collaboration in HTA. Although HTA implementation strategies from the region can serve as relevant examples for other CEE countries with a similar cultural environment and economic status, HTA roadmaps are still not fully transferable without taking into account country-specific aspects such as country size, gross domestic product per capita, major social values, public health priorities, and fragmentation of healthcare financing. Copyright © 2016 John Wiley & Sons, Ltd.
Background: Evaluation of integrated care programmes for individuals with multi-morbidity requires a broader evaluation framework, and a broader definition of added value, than is common in cost-utility analysis. This is possible through the use of Multi-Criteria Decision Analysis (MCDA).

Methods and results: This paper presents the seven steps of an MCDA to evaluate 17 different integrated care programmes for individuals with multi-morbidity in 8 European countries participating in the 4-year, EU-funded SELFIE project. In step one, qualitative research was undertaken to better understand the decision context of these programmes. The programmes faced decisions related to their sustainability in terms of reimbursement, continuation, extension, and/or wider implementation. In step two, a uniform set of decision criteria was defined in terms of outcomes measured across the 17 programmes: physical functioning, psychological well-being, social relationships and participation, enjoyment of life, resilience, person-centeredness, continuity of care, and total health and social care costs. These were supplemented by programme-type-specific outcomes. Step three presents the quasi-experimental studies designed to measure the performance of the programmes on the decision criteria. Step four details the methods (Discrete Choice Experiment, Swing Weighting) used to determine the relative importance of the decision criteria among five stakeholder groups per country. An example in step five illustrates the value-based method of MCDA, in which the performance of a programme on each decision criterion is combined with the weight of that criterion to derive an overall value score. Step six describes how we deal with uncertainty and introduces the Conditional Multi-Attribute Acceptability Curve. Step seven addresses the interpretation of results in stakeholder workshops.

Discussion: By discussing our solutions to the challenges involved in creating a uniform MCDA approach for the evaluation of different programmes, this paper provides guidance for future evaluations and stimulates debate on how to evaluate integrated care for multi-morbidity.

Electronic supplementary material: The online version of this article (10.1186/s12913-018-3367-4) contains supplementary material, which is available to authorized users.
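The value-based MCDA scoring described in step five reduces, in its simplest form, to a weighted sum: each programme's performance on a criterion is multiplied by that criterion's weight and the products are summed into an overall value score. A minimal sketch follows; the criterion names, weights, and performance scores are illustrative placeholders, not data from the SELFIE project.

```python
def overall_value(performance, weights):
    """Weighted-sum MCDA value score: V = sum_i w_i * s_i.

    performance: criterion -> normalised performance score (0-1)
    weights:     criterion -> relative importance weight (sums to 1)
    """
    assert set(performance) == set(weights), "criteria must match"
    return sum(weights[c] * performance[c] for c in performance)

# Illustrative criteria and numbers only.
weights = {"physical functioning": 0.40, "well-being": 0.35, "costs": 0.25}
programme_a = {"physical functioning": 0.7, "well-being": 0.6, "costs": 0.5}
print(round(overall_value(programme_a, weights), 3))  # 0.615
```

In the actual evaluation the weights would come from the elicitation methods named in step four (Discrete Choice Experiment, Swing Weighting), elicited separately per stakeholder group and country.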