Background: The surfaces of the bones in the knee are covered with articular cartilage, a rubber-like substance that is very smooth, allowing near-frictionless movement in the joint and acting as a shock absorber. The cells that form the cartilage are called chondrocytes. Natural articular cartilage is hyaline cartilage. Articular cartilage has very little capacity for self-repair, so damage may be permanent. Various methods have been used to try to repair cartilage. Autologous chondrocyte implantation (ACI) involves laboratory culture of cartilage-producing cells from the knee, which are then implanted into the chondral defect.
Objective: To assess the clinical effectiveness and cost-effectiveness of ACI in chondral defects in the knee, compared with microfracture (MF).
Data sources: A broad search was done in MEDLINE, EMBASE, The Cochrane Library, the NHS Economic Evaluation Database and Web of Science, for studies published since the last Health Technology Assessment review.
Review methods: Systematic review of recent reviews, trials, long-term observational studies and economic evaluations of the use of ACI and MF for repairing symptomatic articular cartilage defects of the knee. A new economic model was constructed. Submissions from two manufacturers and the ACTIVE (Autologous Chondrocyte Transplantation/Implantation Versus Existing Treatment) trial group were reviewed. Survival analysis was based on long-term observational studies.
Results: Four randomised controlled trials (RCTs) published since the last appraisal provided evidence on the efficacy of ACI. The SUMMIT (Superiority of Matrix-induced autologous chondrocyte implant versus Microfracture for Treatment of symptomatic articular cartilage defects) trial compared matrix-applied chondrocyte implantation (MACI®) against MF. The TIG/ACT/01/2000 (TIG/ACT) trial compared ACI with characterised chondrocytes against MF. The ACTIVE trial compared several forms of ACI against standard treatments, mainly MF.
In the SUMMIT trial, improvements in knee injury and osteoarthritis outcome scores (KOOSs), and the proportion of responders, were greater in the MACI group than in the MF group. In the TIG/ACT trial there was improvement in the KOOS at 60 months, but no difference between ACI and MF overall. Patients whose symptoms had started less than 3 years earlier did better with ACI. Results from ACTIVE have not yet been published. Survival analysis suggests that long-term results are better with ACI than with MF. Economic modelling suggested that ACI was cost-effective compared with MF across a range of scenarios.
Limitations: The main limitation is the lack of RCT data beyond 5 years of follow-up. A second is that the techniques of ACI are evolving, so long-term data come from trials using forms of ACI that have now been superseded. In the modelling, we therefore assumed that the durability of cartilage repair seen in studies of older forms of ACI also applies to newer forms. A third is that the high list prices of chondrocytes are reduced by confidential discounting. The main research needs are for longer-term follow-up and for trials of the next generation of ACI.
Conclusions: The evidence base for ACI has improved since the last appraisal by the National Institute for Health and Care Excellence. In most analyses, the incremental cost-effectiveness ratios for ACI compared with MF appear to be within a range usually considered acceptable. Research is needed into the long-term results of new forms of ACI.
Study registration: This study is registered as PROSPERO CRD42014013083.
Funding: The National Institute for Health Research Health Technology Assessment programme.
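The cost-effectiveness comparison above rests on the incremental cost-effectiveness ratio (ICER): the extra cost of one treatment over another divided by the extra quality-adjusted life-years (QALYs) gained. A minimal sketch of the arithmetic, using hypothetical figures rather than the model's confidentially discounted costs:

```python
# Illustrative ICER calculation. All costs and QALYs below are hypothetical,
# not figures from the ACI/MF economic model.
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical: the new treatment costs £12,000 more and yields 0.5 more QALYs,
# giving £24,000 per QALY gained.
print(icer(cost_new=20000.0, cost_old=8000.0, qaly_new=3.0, qaly_old=2.5))
```

An ICER is then judged against a willingness-to-pay threshold; "within a range usually considered acceptable" refers to such a threshold, conventionally £20,000–£30,000 per QALY in NICE appraisals.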
A systematic review of studies of the cost-effectiveness of telemedicine and telecare was undertaken, covering the literature from 1990 until September 2010. Twelve databases were searched, using economic evaluation terms combined with telemedicine terms. The search identified 80 studies that were classed as full economic evaluations; the majority (38) were cost-consequence analyses. There were 15 cost-effectiveness analyses (CEA) and seven cost-utility analyses (CUA). In the period January 2004 to September 2010 there were 47 studies, of which 11 were CEA and seven were CUA. Economic tools are increasingly being used for telemedicine and telecare studies, although better reporting of the methodologies and findings of the economic evaluations is required. Nonetheless, the results of the review were consistent with previous findings: there is still no conclusive evidence that telemedicine and telecare interventions are cost-effective compared with conventional health care.
Background: Gastroenteritis is a common, transient disorder usually caused by infection and characterised by the acute onset of diarrhoea. Multiplex gastrointestinal pathogen panel (GPP) tests simultaneously identify common bacterial, viral and parasitic pathogens using molecular testing. By providing test results more rapidly than conventional testing methods, GPP tests might positively influence the treatment and management of patients presenting in hospital or in the community.
Objective: To systematically review the evidence for GPP tests [xTAG® (Luminex, Toronto, ON, Canada), FilmArray (BioFire Diagnostics, Salt Lake City, UT, USA) and Faecal Pathogens B (AusDiagnostics, Beaconsfield, NSW, Australia)] and to develop a de novo economic model to compare the cost-effectiveness of GPP tests with conventional testing in England and Wales.
Data sources: Multiple electronic databases, including MEDLINE, EMBASE, Web of Science and the Cochrane Database, were searched from inception to January 2016 (with supplementary searches of other online resources).
Review methods: Eligible studies included patients with acute diarrhoea, compared GPP tests with standard microbiology techniques, and reported patient, management, test accuracy or cost-effectiveness outcomes. Quality assessment of eligible studies used tailored Quality Assessment of Diagnostic Accuracy Studies-2, Consolidated Health Economic Evaluation Reporting Standards and Philips checklists. The meta-analysis estimated positive and negative agreement for each pathogen. A de novo decision tree model compared patients managed with GPP testing (or comparable coverage) with patients managed using conventional tests, within the Public Health England pathway. Economic models included hospital and community management of patients with suspected gastroenteritis. The model estimated costs (in 2014/15 prices) and quality-adjusted life-year losses from an NHS and Personal Social Services perspective.
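At its core, a decision tree model of this kind reduces to expected-cost arithmetic over mutually exclusive branches. A minimal sketch, with invented probabilities and costs standing in for the model's actual parameters:

```python
# Expected cost of one arm of a decision tree. All probabilities and costs
# below are invented for illustration; they are not the model's parameters.
def expected_cost(branches):
    """branches: list of (probability, cost) pairs; probabilities must sum to 1."""
    total_p = sum(p for p, _ in branches)
    assert abs(total_p - 1.0) < 1e-9, "branch probabilities must sum to 1"
    return sum(p * cost for p, cost in branches)

# Hypothetical arms: pathogen detected (dearer management pathway) vs not detected.
gpp_arm = expected_cost([(0.3, 1200.0), (0.7, 400.0)])
conventional_arm = expected_cost([(0.3, 1500.0), (0.7, 450.0)])
print(gpp_arm, conventional_arm)
```

A full model attaches quality-adjusted life-year losses to each branch in the same way, so that arms can be compared on both costs and outcomes.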
Results: Twenty-three studies informed the review of clinical evidence (17 of xTAG, four of FilmArray, two of both xTAG and FilmArray, and none of Faecal Pathogens B). No study provided an adequate reference standard with which to compare the test accuracy of GPP with conventional tests. A meta-analysis (of 10 studies) found considerable heterogeneity; however, GPP testing produces a greater number of pathogen-positive findings than conventional testing. It is unclear whether or not these additional ‘positives’ are clinically important. The review identified no robust evidence to inform consequent clinical management of patients. There is considerable uncertainty about the cost-effectiveness of GPP panels used to test for suspected infectious gastroenteritis in hospital and community settings. Uncertainties in the model include length of stay, assumptions about false-positive findings and the costs of tests. Although there is potential for cost-effectiveness in both settings, key modelling assumptions need to be verified and model findings remain tentative.
Limitations: No test–treat trials were retrieved. The economic model reflects one pattern of care, which will vary across the NHS.
Conclusions: The systematic review and cost-effectiveness model identify uncertainties about the adoption of GPP tests within the NHS. GPP testing will generally correctly identify pathogens identified by conventional testing; however, these tests also generate considerable numbers of additional positive results of uncertain clinical importance.
Future work: An independent reference standard may not exist to evaluate alternative approaches to testing. A test–treat trial might ascertain whether or not additional GPP ‘positives’ are clinically important or result in overdiagnosis, whether or not earlier diagnosis leads to earlier discharge of patients, and what the health consequences of earlier intervention are.
Future work might also consider the public health impact of different testing strategies, as test results form the basis for public health surveillance.
Study registration: This study is registered as PROSPERO CRD2016033320.
Funding: The National Institute for Health Research Health Technology Assessment programme.
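The positive and negative agreement statistics meta-analysed in the review come from a per-pathogen 2×2 cross-tabulation of GPP results against conventional results. A minimal sketch with invented counts:

```python
# Positive/negative percent agreement between a GPP test and conventional
# testing for one pathogen. The 2x2 counts below are invented for illustration.
def agreement(both_pos, gpp_only_pos, conv_only_pos, both_neg):
    """Positive agreement: proportion of conventional-positive samples the GPP
    also calls positive. Negative agreement: proportion of conventional-negative
    samples the GPP also calls negative."""
    ppa = both_pos / (both_pos + conv_only_pos)
    npa = both_neg / (both_neg + gpp_only_pos)
    return ppa, npa

ppa, npa = agreement(both_pos=45, gpp_only_pos=10, conv_only_pos=5, both_neg=140)
print(f"positive agreement {ppa:.2f}, negative agreement {npa:.2f}")
```

Agreement is used rather than sensitivity/specificity precisely because, as the review notes, no adequate independent reference standard exists: the `gpp_only_pos` cell mixes true extra detections with false positives, and the two cannot be separated without a test–treat trial.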
Background: In contrast to other pregnancy complications, the economic impact of stillbirth is poorly understood. We aimed to carry out a preliminary exploration of the healthcare costs of stillbirth from the time of pregnancy loss and the period afterwards, and to explore the impact of a previous stillbirth on the healthcare costs of the next pregnancy.
Methods: A structured review of the literature, including cost studies and a description of costs to healthcare providers for care provided at the time of stillbirth and in a subsequent pregnancy. Costs in a subsequent pregnancy were compared across three alternative models of care for multiparous women, developed from national guidelines and expert opinion: i) “low-risk” women who had a live birth; ii) “high-risk” women who had a live birth; and iii) women with a previous stillbirth.
Results: The costs to the National Health Service (NHS) for investigation immediately following stillbirth ranged from £1,242 (core recommended investigations) to £1,804 (comprehensive investigation). The costs in the next pregnancy following a stillbirth ranged from £2,147 (low-risk woman with a previous healthy child) to £3,751 (woman with a previous stillbirth of unknown cause). The cost in the next pregnancy following a stillbirth due to a known recurrent or an unknown cause is almost £500 greater than in the pregnancy following a stillbirth due to a known non-recurrent cause.
Conclusions: The study has highlighted the paucity of evidence regarding the economic issues surrounding stillbirth. Women who have experienced a previous stillbirth are likely to use more healthcare services in their next pregnancy, particularly where no cause is found. Every effort should be made to determine the cause of stillbirth to reduce the overall cost to the NHS. The cost associated with identifying the cause of stillbirth could offset the costs of care in the next pregnancy.
Future research should concentrate on robust studies looking into the wider economic impact of stillbirth.
To determine whether the recommended screening interval for diabetic retinopathy (DR) in the UK can safely be extended beyond 1 year. Systematic review of clinical and cost-effectiveness studies. Nine databases were searched with no date restrictions. Randomised controlled trials (RCTs), cohort studies, and prognostic or economic modelling studies describing the incidence and progression of DR in populations with type 1 or type 2 diabetes mellitus of either sex and of any age, and reporting incidence and progression of DR in relation to screening interval (vs an annual screening interval) and/or prognostic factors, were included. Narrative synthesis was undertaken. 14 013 papers were identified, of which 11 observational studies, 5 risk stratification modelling studies and 9 economic studies were included. Data were available for 262 541 patients, of whom at least 228 649 (87%) had type 2 diabetes. There were no RCTs. Studies concluded that there is little difference in clinical outcomes between 1 yearly and 2 yearly screening in low-risk patients. However, loss to follow-up was high (13–31%), and there was heterogeneity in definitions of low risk and variation in screening and grading protocols for prior retinopathy results. Observational and economic modelling studies in low-risk patients show little difference in clinical outcomes between 1-year and 2-year screening intervals. The lack of experimental research designs and the heterogeneity in definitions of low risk considerably limit the reliability and validity of this conclusion. Cost-effectiveness findings were mixed. There is insufficient evidence to recommend extending the screening interval beyond 1 year.
Telemedicine was perceived by cardiologists, district clinicians, and families as reliable and efficient. Despite equivocal 6-month cost results, these findings suggest that investment in the technology is warranted to enhance pediatric and perinatal cardiology services.
Objective: To compare 10 year revision rates for frequently used types of primary total hip replacement, to inform the setting of a new benchmark rate in England and Wales that will be of international relevance.
Design: Retrospective cohort study.
Setting: National Joint Registry.
Participants: 239 000 patient records.
Main outcome measures: Revision rates for five frequently used types of total hip replacement that differed according to bearing surface and fixation mode, encompassing 62% of all primary total hip replacements in the National Joint Registry for England and Wales. Revision rates were compared using Kaplan-Meier and competing risks analyses, and five and 10 year rates were estimated using well fitting parametric models.
Results: Estimated revision rates at 10 years were 4% or below for four of the five types of total hip replacement investigated. Rates differed little according to Kaplan-Meier or competing risks analysis, but differences between prosthesis types were more substantial. Cemented prostheses with ceramic-on-polyethylene bearing surfaces had the lowest revision rates (1.88-2.11% at 10 years, depending on the method used), and cementless prostheses with ceramic-on-ceramic bearing surfaces had the highest revision rates (3.93-4.33%). Men were more likely to receive revision of total hip replacement than were women, and this difference was statistically significant for four of the five prosthesis types.
Conclusions: Ten year revision rate estimates were all less than 5%, and in some instances considerably less. The results suggest that the current revision rate benchmark should be at least halved, from 10% to less than 5% at 10 years. This has implications for benchmarks internationally.
Introduction: Total hip replacement is a successful intervention for hip osteoarthritis. In the United States more than 300 000 total hip replacements were undertaken in 2010,1 and in the United Kingdom about 80 000 are undertaken annually.2 Many total hip replacement components exist: surgeons in the United Kingdom can select from more than 150 different devices and combinations of components.3 Ageing of host bone, wear in bearing surfaces, and other contingencies mean that some total hip replacements need replacing during a patient's lifetime. Surgical revision is a complex and demanding procedure that is inconvenient, traumatic, and expensive. In the past, alarmingly high revision rates were documented for some total hip replacement designs. Catastrophic failure resulted in a 67% five year revision rate for one device.4 The 3M Capital hip, implanted in more than 4000 patients in the United Kingdom from 1991, raised concerns in 1995, and a Department of Health hazard notice was issued in 1998.5 The DePuy ASR device was recalled from the market in 2010 after more than 93 000 had been implanted worldwide.6 7 Concerns have been raised about devices with metal-on-metal bearing surfaces.8 9 In 2009 roughly a third of hip replacements in the United States were metal-on-metal. Such episodes highlight the need for monitoring.
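Registry revision rates like those reported above are typically read off a Kaplan-Meier survival curve: each revision shrinks the surviving fraction, while censored patients (still unrevised at last follow-up) simply leave the risk set. A minimal sketch on toy data (it assumes no tied event times, which a production implementation must handle):

```python
# Kaplan-Meier product-limit estimator on toy follow-up data, not registry data.
# times: follow-up in years; events: 1 = revision occurred, 0 = censored.
def kaplan_meier(times, events):
    """Return (time, survival probability) at each revision time.
    Assumes all follow-up times are distinct, for simplicity."""
    at_risk = len(times)
    survival = 1.0
    curve = []
    for t, revised in sorted(zip(times, events)):
        if revised:
            survival *= (at_risk - 1) / at_risk
            curve.append((t, survival))
        at_risk -= 1  # revisions and censorings both leave the risk set
    return curve

curve = kaplan_meier([1.0, 2.0, 3.0, 4.0, 5.0], [1, 0, 1, 0, 0])
print(curve)  # the revision rate at time t is 1 minus the survival probability
```

Competing risks analyses, also used in the study, instead treat death before revision as an event that precludes revision, which is why the paper reports that the two approaches gave slightly different estimates.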
Background: Diabetic retinopathy is an important cause of visual loss. Laser photocoagulation preserves vision in diabetic retinopathy but is currently used at the stage of proliferative diabetic retinopathy (PDR).
Objectives: The primary aim was to assess the clinical effectiveness and cost-effectiveness of pan-retinal photocoagulation (PRP) given at the non-proliferative stage of diabetic retinopathy (NPDR), compared with waiting until the high-risk PDR (HR-PDR) stage was reached. There have been recent advances in laser photocoagulation techniques, and in the use of laser treatments combined with anti-vascular endothelial growth factor (VEGF) drugs or injected steroids. Our secondary questions were: (1) if PRP were to be used in NPDR, which form of laser treatment should be used? and (2) is adjuvant therapy with intravitreal drugs clinically effective and cost-effective alongside PRP?
Eligibility criteria: Randomised controlled trials (RCTs) for efficacy, but other designs were also used.
Data sources: MEDLINE and EMBASE to February 2014, and Web of Science.
Review methods: Systematic review and economic modelling.
Results: The Early Treatment Diabetic Retinopathy Study (ETDRS), published in 1991, was the only trial designed to determine the best time to initiate PRP. It randomised one eye of 3711 patients with mild-to-severe NPDR or early PDR to early photocoagulation, and the other to deferral of PRP until HR-PDR developed. The risk of severe visual loss after 5 years for eyes assigned to PRP for NPDR or early PDR, compared with deferral of PRP, was reduced by 23% (relative risk 0.77, 99% confidence interval 0.56 to 1.06). However, the ETDRS did not provide results separately for NPDR and early PDR. In economic modelling, the base case found that early PRP could be more effective and less costly than deferred PRP. Sensitivity analyses gave similar results, with early PRP continuing to dominate or having a low incremental cost-effectiveness ratio. However, there are substantial uncertainties.
For our secondary aims we found 12 trials of lasers in DR, with 982 patients in total (individual trials ranged from 40 to 150 patients). Most were in PDR, but five included some patients with severe NPDR. Three compared multi-spot pattern lasers against the argon laser. RCTs comparing laser applied in a lighter manner (less-intensive burns) with conventional methods (more intense burns) reported little difference in efficacy but fewer adverse effects. One RCT suggested that selective laser treatment targeting only ischaemic areas was effective. Observational studies showed that the most important adverse effect of PRP was macular oedema (MO), which can cause visual impairment, usually temporary. Ten trials of laser and anti-VEGF or steroid drug combinations were consistent in reporting a reduction in the risk of PRP-induced MO.
Limitations: The current evidence is insufficient to recommend PRP for severe NPDR.
Conclusions: There is, as yet, no convincing evidence that modern laser systems are more effective than the argon laser used in the ETDRS, but they appear to have fewer adverse effects. We recommend a trial of PRP for severe NPDR and early PDR compared with deferring PRP until the HR-PDR stage. The trial would use modern laser technologies, and investigate the value of adjuvant prophylactic anti-VEGF or steroid drugs.
Study registration: This study is registered as PROSPERO CRD42013005408.
Funding: The National Institute for Health Research Health Technology Assessment programme.
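The 23% risk reduction quoted from the ETDRS is simply the complement of the relative risk: RR = risk in the early-treatment eyes divided by risk in the deferred eyes. A minimal sketch with invented counts (not ETDRS data):

```python
# Relative risk from event counts; the counts below are invented, not ETDRS data.
def relative_risk(events_treated, n_treated, events_control, n_control):
    """Ratio of the event risk in the treated group to that in the control group."""
    return (events_treated / n_treated) / (events_control / n_control)

# A relative risk of 0.77 corresponds to a 23% relative risk reduction.
rr = relative_risk(events_treated=77, n_treated=1000,
                   events_control=100, n_control=1000)
print(round(rr, 2), f"risk reduction {round((1 - rr) * 100)}%")
```

Note that the 99% confidence interval quoted in the abstract (0.56 to 1.06) crosses 1, so the point estimate of benefit was not statistically significant at that level.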