Rationale: To improve the effectiveness of tuberculosis (TB) control programs in the United States by identifying cost-effective priorities for screening for latent tuberculosis infection (LTBI). Objectives: To estimate the cost-effectiveness of LTBI screening using the tuberculin skin test (TST) and interferon-γ release assays (IGRAs). Methods: A Markov model of screening for LTBI with TST and IGRA in risk groups considered in current LTBI screening guidelines. Measurements and Main Results: In all risk groups, TST and IGRA screening resulted in increased mean life expectancy, ranging from 0.03 to 0.24 life-months per person screened. IGRA screening resulted in greater life expectancy gains than TST. Screening always cost more than not screening, but IGRA was cost-saving compared with TST in some groups. Four patterns of cost-effectiveness emerged, corresponding to four risk categories. (1) Individuals at highest risk of TB reactivation (close contacts and those infected with HIV): the incremental cost-effectiveness ratio (ICER) of IGRA compared with TST was less than $100,000 per quality-adjusted life year (QALY) gained. (2) The foreign-born: IGRA was cost-saving compared with TST and cost-effective compared with no screening (ICER <$100,000 per QALY gained). (3) Vulnerable populations (e.g., homeless persons, drug users, or former prisoners): the ICER of TST screening was approximately $100,000-$150,000 per QALY gained, but IGRA was not cost-effective. (4) Medical comorbidities (e.g., diabetes): the ICER of screening with TST or IGRA was greater than $100,000 per QALY. Conclusions: LTBI screening guidelines could make progress toward TB elimination by prioritizing screening for close contacts, those infected with HIV, and the foreign-born regardless of time living in the United States.
For these groups, IGRA screening was more cost-effective than TST screening.

Keywords: latent tuberculosis; cost-effectiveness; tuberculin skin test; interferon-γ release assay

Reactivation of latent tuberculosis infection (LTBI) accounts for approximately 70% of cases of active tuberculosis (TB) in the United States (1, 2). Screening and treatment for LTBI are therefore a cornerstone of the strategy for the elimination of TB disease in the United States (3, 4). Previous studies have examined priorities for LTBI screening and treatment, and several have found that isoniazid (INH) therapy for low-risk tuberculin reactors is cost-effective, and even cost-saving in some populations (5-8). These studies, however, used estimates of the prevalence of LTBI and rates of reactivation TB observed in the 1950s and 1960s, and may not reflect current epidemiologic trends (9-11). Furthermore, given the development of interferon-γ release assays (IGRA) as a screening test for LTBI, it is important to expand the investigation to compare the effectiveness and cost-effectiveness of both tuberculin skin test (TST) and IGRA screening (4). Although prior studies have investigated the cost-effectiveness of IGRA, they focused on select risk groups, and did not prioritize s...
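The two quantities the abstract above reports, life-expectancy gain from a Markov cohort model and the ICER, can be sketched as follows. All parameter values here are hypothetical placeholders for illustration, not the study's inputs, and the three-state model is a deliberate simplification of a full LTBI screening model.

```python
# Minimal sketch of a Markov cohort model for LTBI outcomes (illustrative
# numbers only; NOT the parameters or structure used in the study).
def life_expectancy(annual_reactivation_risk, annual_tb_mortality=0.05,
                    background_mortality=0.01, horizon_years=60):
    """Expected undiscounted life-years for a cohort with a given LTBI
    reactivation risk, using three states: latent, active TB, dead."""
    latent, active, dead = 1.0, 0.0, 0.0
    life_years = 0.0
    for _ in range(horizon_years):
        life_years += latent + active
        new_active = latent * annual_reactivation_risk
        latent_deaths = latent * background_mortality
        active_deaths = active * (background_mortality + annual_tb_mortality)
        # simplification: surviving active cases are treated and folded back
        # into the latent (non-infectious) state
        recovered = active - active_deaths
        latent = latent - new_active - latent_deaths + recovered
        active = new_active
        dead += latent_deaths + active_deaths
    return life_years

# Screening plus treatment lowers the reactivation risk (hypothetical values).
le_no_screen = life_expectancy(annual_reactivation_risk=0.003)
le_screen = life_expectancy(annual_reactivation_risk=0.001)
gain_months = (le_screen - le_no_screen) * 12
print(f"life-expectancy gain: {gain_months:.2f} months per person screened")

# Incremental cost-effectiveness ratio (ICER): extra cost per extra QALY.
cost_screen, cost_no_screen = 600.0, 150.0   # hypothetical lifetime costs
qaly_screen, qaly_no_screen = 22.15, 22.10   # hypothetical discounted QALYs
icer = (cost_screen - cost_no_screen) / (qaly_screen - qaly_no_screen)
print(f"ICER: ${icer:,.0f} per QALY gained")
```

The ICER line is the standard definition used throughout these abstracts: incremental cost divided by incremental health benefit of one strategy over its comparator.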
Recognizing the importance of timely guidance regarding the rapidly evolving field of hepatitis C management, the American Association for the Study of Liver Diseases (AASLD) and the Infectious Diseases Society of America (IDSA) developed a web-based process for the expeditious formulation and dissemination of evidence-based recommendations. Launched in 2014, the hepatitis C virus (HCV) guidance website undergoes periodic updates as necessitated by the availability of new therapeutic agents and/or research data. A major update was released electronically in September 2017, prompted primarily by the approval of new direct-acting antiviral agents and expansion of the guidance's scope. This update summarizes the latest release of the HCV guidance and focuses on new or amended recommendations since the previous September 2015 print publication. The recommendations herein were developed by volunteer hepatology and infectious disease experts representing AASLD and IDSA and have been peer reviewed and approved by each society's governing board.
We investigated prescribing patterns for four opioid use disorder (OUD) medications: 1) injectable naltrexone, 2) oral naltrexone, 3) sublingual or oral-mucosal buprenorphine/naloxone, and 4) sublingual buprenorphine, as well as transdermal buprenorphine (which is approved for treating pain, but not OUD), in a nationally representative claims-based database (Truven Health MarketScan®) of commercially insured individuals in the United States. We calculated the prevalence of OUD in the database for each year from 2010 to 2014 and the proportion of diagnosed patient-months on OUD medication. We compared characteristics of individuals diagnosed with OUD who did and did not receive these medications with bivariate descriptive statistics. Finally, we fit a Cox proportional hazards model of time to discontinuation of therapy as a function of therapy type, controlling for relevant confounders. From 2010 to 2014, the proportion of commercially insured individuals diagnosed with OUD grew fourfold (0.12% to 0.48%), but the proportion of diagnosed patient-months on medication decreased from 25% in 2010 (0.05% injectable naltrexone, 0.4% oral naltrexone, 23.1% sublingual or oral-mucosal buprenorphine/naloxone, 1.5% sublingual buprenorphine, and 0% transdermal buprenorphine) to 16% in 2014 (0.2% injectable naltrexone, 0.4% oral naltrexone, 13.8% sublingual or oral-mucosal buprenorphine/naloxone, 1.4% sublingual buprenorphine, and 0.3% transdermal buprenorphine). Individuals who received medication therapy were more likely to be male, younger, and have an additional substance use disorder compared with those diagnosed with OUD who did not receive medication therapy. Those prescribed injectable naltrexone were more often male, younger, and diagnosed with additional substance use disorders compared with those prescribed other medications for opioid use disorder (MOUDs).
At 30 days after initiation, treatment discontinuation rates were 52% for individuals treated with injectable naltrexone, 70% for oral naltrexone, 31% for sublingual or oral-mucosal buprenorphine/naloxone, 58% for sublingual buprenorphine, and 51% for transdermal buprenorphine. In the Cox proportional hazards model, use of injectable naltrexone, oral naltrexone, sublingual buprenorphine, and transdermal buprenorphine was associated with significantly greater hazards of discontinuing therapy beginning >30 days after MOUD initiation (HR = 2.17, 2.54, 1.15, and 2.21, respectively; 95% CIs 2.04-2.30, 2.45-2.64, 1.10-1.19, and 2.11-2.33), compared with the use of sublingual or oral-mucosal buprenorphine/naloxone. This analysis demonstrates that the use of evidence-based medication therapies has not kept pace with increases in OUD diagnoses in commercially insured populations in the United States. Among those who have been treated, discontinuation rates >30 days after initiation are high. The proportion treated with injectable naltrexone, oral naltrexone, and transdermal buprenorphine grew ...
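The hazard ratios above can be read through the proportional-hazards identity S_alt(t) = S_ref(t)^HR: a therapy with HR = 2.54 relative to the reference has retention equal to the reference retention curve raised to the 2.54 power. The sketch below uses the reported HRs but a hypothetical exponential baseline retention curve (the 180-day median is an assumption, not from the study).

```python
import math

# Under proportional hazards, S_alt(t) = S_ref(t) ** HR.
# Reference therapy: sublingual or oral-mucosal buprenorphine/naloxone.
def retention(t_days, median_days_ref=180.0, hr=1.0):
    """Fraction still on therapy at day t, assuming an exponential
    baseline survival curve with the stated median (hypothetical)."""
    base_rate = math.log(2) / median_days_ref  # hazard giving that median
    return math.exp(-base_rate * t_days) ** hr

hrs = {"injectable naltrexone": 2.17, "oral naltrexone": 2.54,
       "sublingual buprenorphine": 1.15, "transdermal buprenorphine": 2.21}
for name, hr in sorted(hrs.items(), key=lambda kv: kv[1]):
    print(f"{name:26s} retention at 90 days: {retention(90, hr=hr):.1%}")
```

Note how a larger HR compresses the retention curve: the HR = 2.54 therapy always sits below the HR = 1.15 therapy at every time point under this assumption.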
Background Diagnosis of chronic Hepatitis C Virus (HCV) infection requires both a positive HCV antibody screen and a confirmatory nucleic acid test (NAT). HCV core antigen (HCVcAg) is a potential alternative to NAT. Purpose This systematic review evaluated the accuracy of diagnosis of active HCV infection among adults and children for five HCVcAg tests compared to NAT. Data Sources EMBASE, PubMed, Web of Science, Scopus, and Cochrane from 1990 through March 31, 2016. Study Selection Cohort, cross-sectional, and randomized controlled trials were included without language restriction. Data Extraction Two independent reviewers extracted data and assessed quality using an adapted Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. Data Synthesis 44 studies evaluated 5 index tests. Studies for the ARCHITECT had the highest quality, while those for Ortho ELISA had the lowest. From bivariate analyses, the sensitivity and specificity with 95% CI were: ARCHITECT 93.4% (90.1, 96.4) and 98.8% (97.4, 99.5), Ortho ELISA 93.2% (81.6, 97.7) and 99.2% (87.9, 100), and Hunan Jynda 59.5% (46.0, 71.7) and 82.9% (58.6, 94.3). Insufficient data were available for a meta-analysis for Lumipulse and Lumispot. In three quantitative studies using ARCHITECT, HCVcAg correlated closely with HCV RNA above 3000 IU/mL. Limitations There were insufficient data on covariates such as HIV or HBV status for sub-group analyses. Few studies reported genotypes of isolates, and there were scant data for genotypes 4, 5, and 6. Most studies were conducted in high-resource settings within reference laboratories. Conclusions HCVcAg assays with signal amplification have high sensitivity, high specificity, and good correlation with HCV RNA above 3000 IU/mL. HCVcAg assays have the potential to replace NAT in high HCV prevalence settings.
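The conclusion that HCVcAg could replace NAT specifically in high-prevalence settings follows from Bayes' rule: at a fixed sensitivity and specificity, the positive predictive value rises with prevalence. A short sketch using the review's ARCHITECT point estimates (the prevalence values are illustrative):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive values via Bayes' rule."""
    tp = sensitivity * prevalence            # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence      # false negatives
    tn = specificity * (1 - prevalence)      # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# ARCHITECT point estimates from the review: sens 93.4%, spec 98.8%.
for prev in (0.01, 0.10, 0.40):
    ppv, npv = predictive_values(0.934, 0.988, prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.1%}, NPV {npv:.1%}")
```

At 1% prevalence most positives are false, while at 40% prevalence the PPV exceeds 98%, which is the arithmetic behind the "high-prevalence settings" caveat.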
Significance This paper compares the probabilistic accuracy of short-term forecasts of reported deaths due to COVID-19 during the first year and a half of the pandemic in the United States. Results show high variation in accuracy between and within stand-alone models and more consistent accuracy from an ensemble model that combined forecasts from all eligible models. This demonstrates that an ensemble model provided a reliable and comparatively accurate means of forecasting deaths during the COVID-19 pandemic that exceeded the performance of all of the models that contributed to it. This work strengthens the evidence base for synthesizing multiple models to support public-health action.
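The ensemble idea summarized above can be sketched as a per-quantile combination of member forecasts. The forecasts below are hypothetical, and the median rule is one simple combination scheme consistent with how quantile forecasts are commonly pooled; it is not claimed to be the exact method of the paper.

```python
import statistics

# Hypothetical quantile forecasts of weekly deaths from three models.
model_forecasts = {  # model -> {quantile level: forecast value}
    "model_A": {0.1: 900, 0.5: 1200, 0.9: 1600},
    "model_B": {0.1: 700, 0.5: 1000, 0.9: 1500},
    "model_C": {0.1: 1100, 0.5: 1400, 0.9: 2000},
}

# Ensemble: take the median across models at each quantile level.
ensemble = {
    q: statistics.median(f[q] for f in model_forecasts.values())
    for q in (0.1, 0.5, 0.9)
}
print(ensemble)
```

Pooling per quantile keeps the ensemble's interval monotone (10th ≤ 50th ≤ 90th percentile) whenever each member's is, and damps the influence of any single outlying model, which is one intuition for why the ensemble's accuracy was more consistent than that of stand-alone models.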
Background Chronic infection with hepatitis C virus (HCV) genotype 2 or 3 can be treated with sofosbuvir without interferon. Because sofosbuvir is costly, its benefits should be compared with the additional resources used. Objective To estimate the cost-effectiveness of sofosbuvir-based treatments for HCV genotype 2 or 3 infection in the United States. Design Monte Carlo simulation, including deterministic and probabilistic sensitivity analyses. Data Sources Randomized trials, observational cohorts, and national health care spending surveys. Target Population 8 patient types defined by HCV genotype (2 vs. 3), treatment history (naive vs. experienced), and cirrhosis status (noncirrhotic vs. cirrhotic). Time Horizon Lifetime. Perspective Payer. Intervention Sofosbuvir-based therapies, pegylated interferon–ribavirin, and no therapy. Outcome Measures Discounted quality-adjusted life-years (QALYs), costs, and incremental cost-effectiveness ratios (ICERs). Results of Base-Case Analysis The ICER of sofosbuvir-based treatment was less than $100 000 per QALY in cirrhotic patients (genotype 2 or 3 and treatment-naive or treatment-experienced) and in treatment-experienced noncirrhotic patients but was greater than $200 000 per QALY in treatment-naive noncirrhotic patients. Results of Sensitivity Analysis The ICER of sofosbuvir-based therapy for treatment-naive noncirrhotic patients with genotype 2 or 3 infection was less than $100 000 per QALY when the cost of sofosbuvir was reduced by approximately 40% and 60%, respectively. In probabilistic sensitivity analyses, cost-effectiveness conclusions were robust to uncertainty in treatment efficacy. Limitation The analysis did not consider possible benefits of preventing HCV transmission. Conclusion Sofosbuvir provides good value for money for treatment-experienced patients with HCV genotype 2 or 3 infection and those with cirrhosis. 
At their current cost, sofosbuvir-based regimens for treatment-naive noncirrhotic patients exceed willingness-to-pay thresholds commonly cited in the United States. Primary Funding Source National Institute on Drug Abuse and National Institute of Allergy and Infectious Diseases.
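The sensitivity-analysis finding above, that sofosbuvir becomes cost-effective after a roughly 40-60% price reduction, is a threshold calculation on the ICER. The sketch below shows the arithmetic with hypothetical inputs (the costs and QALY gain are invented for illustration; only the $100,000/QALY threshold is taken from the abstract):

```python
# Fraction by which the drug's cost must fall so that the ICER
# (delta_cost - cut * drug_cost) / delta_qaly drops to the threshold.
def price_cut_for_threshold(delta_cost, drug_cost, delta_qaly, threshold):
    cut = (delta_cost - threshold * delta_qaly) / drug_cost
    return max(0.0, min(1.0, cut))  # clamp to a valid fraction

# Hypothetical base case: treatment adds $180,000 in total costs
# ($150,000 of it drug cost) and 0.9 QALYs -> ICER of $200,000/QALY.
cut = price_cut_for_threshold(180_000, 150_000, 0.9, threshold=100_000)
print(f"required price reduction: {cut:.0%}")
```

Solving the same inequality for each patient type is how an analysis arrives at different required discounts (e.g., ~40% for one genotype and ~60% for another).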
Background As highly effective hepatitis C virus (HCV) therapies emerge, data are needed to inform the development of interventions to improve HCV treatment rates. We used simulation modeling to estimate the impact of loss to follow-up on HCV treatment outcomes and to identify intervention strategies likely to provide good value for the resources invested in them. Methods We used a Monte Carlo state-transition model to simulate a hypothetical cohort of chronically HCV-infected individuals recently screened positive for serum HCV antibody. We simulated four hypothetical intervention strategies (linkage to care; treatment initiation; integrated case management; peer navigator) to improve HCV treatment rates, varying efficacies and costs, and identified strategies that would most likely result in the best value for the resources required for implementation. Main Measures Sustained virologic responses (SVRs), life expectancy, quality-adjusted life expectancy (QALE), costs from health system and program implementation perspectives, and incremental cost-effectiveness ratios (ICERs). Results We estimate that imperfect follow-up reduces the real-world effectiveness of HCV therapies by approximately 75%. In the base case, a modestly effective hypothetical peer navigator program maximized the number of SVRs and QALE, with an ICER compared to the next best intervention of $48,700/quality-adjusted life year. Hypothetical interventions that simultaneously addressed multiple points along the cascade provided better outcomes and more value for money than less costly interventions targeting single steps. The 5-year program cost of the hypothetical peer navigator intervention was $14.5 million per 10,000 newly diagnosed individuals. Conclusions We estimate that imperfect follow-up during the HCV cascade of care greatly reduces the real-world effectiveness of HCV therapy. Our mathematical model shows that modestly effective interventions to improve follow-up would likely be cost-effective.
Priority should be given to developing and evaluating interventions addressing multiple points along the cascade rather than options focusing solely on single points.
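The cascade-of-care arithmetic behind the ~75% figure is multiplicative: real-world effectiveness is the product of retention at every step times the per-protocol cure rate. The step probabilities below are hypothetical, chosen only to show how modest per-step losses compound:

```python
# Hypothetical retention probabilities at each step of the HCV care cascade.
steps = {
    "informed of result & referred": 0.70,
    "attended specialist visit":     0.65,
    "initiated treatment":           0.60,
    "completed treatment":           0.90,
}
svr_per_protocol = 0.95  # assumed cure rate among those completing therapy

retained = 1.0
for step, p in steps.items():
    retained *= p
real_world_svr = retained * svr_per_protocol
reduction = 1 - retained

print(f"fraction cured in the real world: {real_world_svr:.1%}")
print(f"effectiveness lost to imperfect follow-up: {reduction:.0%}")
```

No single step here loses more than 40% of patients, yet only about a quarter of the cohort reaches cure, which is why interventions targeting several steps at once outperform single-step fixes.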
IMPORTANCE Testing for and treating latent tuberculosis infection (LTBI) is among the main strategies to achieve TB elimination in the United States. The best approach to testing among non-US born residents, particularly those with comorbid conditions, is uncertain. OBJECTIVE To estimate health outcomes, costs, and cost-effectiveness of LTBI testing and treatment among non-US born residents with and without medical comorbidities. DESIGN, SETTING, AND PARTICIPANTS Decision analytic tree and Markov cohort simulation model among non-US born residents with no comorbidities, with diabetes, with HIV infection, or with end-stage renal disease (ESRD) using a health care sector perspective with 3% annual discounting. Strategies compared included no testing, tuberculin skin test (TST), interferon gamma release assay (IGRA), confirm positive (initial TST, IGRA only for TST-positive results; both tests positive indicates LTBI), and confirm negative (initial IGRA, then TST for IGRA-negative; any test positive indicates LTBI). All strategies were coupled to treatment with 3 months of self-administered rifapentine and isoniazid. MAIN OUTCOMES AND MEASURES Number needed to test and treat to prevent 1 case of TB reactivation, discounted quality-adjusted life-years (QALYs), discounted lifetime medical costs, and incremental cost-effectiveness ratios (ICERs). RESULTS Improving health outcomes increased costs, with choice of test dependent on willingness to pay. Strategies ranked by ascending costs and benefits: no testing, confirm positive, TST, IGRA, and confirm negative. The ICERs varied by non-US born patient risk group: patients with no comorbidities, IGRA ...
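The "number needed to test and treat to prevent 1 case" outcome reported above is the reciprocal of the per-person probability that screening prevents a reactivation. A hypothetical sketch of that arithmetic (every input below is invented for illustration; none comes from the study):

```python
# Number needed to screen to prevent one TB reactivation case:
# 1 / (probability that screening one person prevents one case).
def number_needed_to_screen(ltbi_prevalence, test_sensitivity,
                            uptake_and_completion, treatment_efficacy,
                            lifetime_reactivation_risk):
    cases_prevented_per_person = (ltbi_prevalence * test_sensitivity *
                                  uptake_and_completion * treatment_efficacy *
                                  lifetime_reactivation_risk)
    return 1.0 / cases_prevented_per_person

# Hypothetical inputs: 20% LTBI prevalence, 90% sensitive test, 60% start
# and finish treatment, 90% efficacious regimen, 5% lifetime reactivation.
nns = number_needed_to_screen(0.20, 0.90, 0.60, 0.90, 0.05)
print(f"screen ~{nns:.0f} people to prevent one TB case")
```

Because the factors multiply, the number needed to screen falls sharply in higher-prevalence or higher-reactivation-risk groups, which is what drives the risk-group-dependent ICERs in these abstracts.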