Chronic kidney disease (CKD) is a global health burden with a high economic cost to health systems and is an independent risk factor for cardiovascular disease (CVD). All stages of CKD are associated with increased risks of cardiovascular morbidity, premature mortality, and/or decreased quality of life. CKD is usually asymptomatic until its later stages, and accurate prevalence data are lacking. We therefore sought to determine the prevalence of CKD globally, by stage, geographical location, gender, and age. A systematic review and meta-analysis of observational studies estimating CKD prevalence in general populations was conducted through literature searches in 8 databases. We assessed pooled data using a random effects model. Of 5,842 potential articles, 100 studies of diverse quality were included, comprising 6,908,440 patients. Global mean (95% CI) CKD prevalence was 13.4% (11.7–15.1%) across all 5 stages, and 10.6% (9.2–12.2%) for stages 3–5. Weighting by study quality did not affect prevalence estimates. CKD prevalence by stage was: Stage 1 (eGFR >90 + ACR >30): 3.5% (2.8–4.2%); Stage 2 (eGFR 60–89 + ACR >30): 3.9% (2.7–5.3%); Stage 3 (eGFR 30–59): 7.6% (6.4–8.9%); Stage 4 (eGFR 15–29): 0.4% (0.3–0.5%); and Stage 5 (eGFR <15): 0.1% (0.1–0.1%). CKD has a high global prevalence, with a consistent estimated global prevalence of 11% to 13%, the majority at stage 3. Future research should evaluate intervention strategies deliverable at scale to delay the progression of CKD and improve CVD outcomes.
Objectives To assess the diagnostic accuracy of screening tests for pre-diabetes and the efficacy of interventions (lifestyle or metformin) in preventing onset of type 2 diabetes in people with pre-diabetes. Design Systematic review and meta-analysis. Data sources and methods Medline, PreMedline, and Embase. Study protocols and seminal papers were citation-tracked in Google Scholar to identify definitive trials and additional publications. Data on study design, methods, and findings were extracted onto Excel spreadsheets; a 20% sample was checked by a second researcher. Data extracted for screening tests included diagnostic accuracy and population prevalence. Two meta-analyses were performed: one summarising the accuracy of screening tests (with the oral glucose tolerance test as the standard) for identification of pre-diabetes, and the other assessing the relative risk of progression to type 2 diabetes after either lifestyle intervention or treatment with metformin. Eligibility criteria Empirical studies evaluating accuracy of tests for identification of pre-diabetes. Interventions (randomised trials and interventional studies) with a control group in people identified through screening. No language restrictions. Results 2874 titles were scanned and 148 papers (covering 138 studies) reviewed in full. The final analysis included 49 studies of screening tests (five of which were prevalence studies) and 50 intervention trials. HbA1c had a mean sensitivity of 0.49 (95% confidence interval 0.40 to 0.58) and specificity of 0.79 (0.73 to 0.84) for identification of pre-diabetes, though different studies used different cut-off values. Fasting plasma glucose had a mean sensitivity of 0.25 (0.19 to 0.32) and specificity of 0.94 (0.92 to 0.96). Different measures of glycaemic abnormality identified different subpopulations (for example, 47% of people with abnormal HbA1c had no other glycaemic abnormality).
Lifestyle interventions were associated with a 36% (28% to 43%) reduction in relative risk of type 2 diabetes over six months to six years, attenuating to 20% (8% to 31%) at follow-up in the period after the trials. Conclusions HbA1c is neither sensitive nor specific for detecting pre-diabetes; fasting glucose is specific but not sensitive. Interventions in people classified through screening as having pre-diabetes have some efficacy in preventing or delaying onset of type 2 diabetes in trial populations. As screening is inaccurate, many people will receive an incorrect diagnosis and be referred on for interventions, while others will be falsely reassured and not offered the intervention. These findings suggest that "screen and treat" policies alone are unlikely to have a substantial impact on the worsening epidemic of type 2 diabetes. Registration PROSPERO (No CRD42016042920).
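The screening-inaccuracy argument above can be made concrete with a back-of-envelope calculation. This sketch is illustrative only, not the paper's analysis: it uses the pooled fasting plasma glucose sensitivity (0.25) and specificity (0.94) reported above, and an assumed round-number pre-diabetes prevalence of 35% chosen purely for illustration.

```python
# Illustrative sketch (not from the paper): expected outcomes per 1000 people
# screened for pre-diabetes with fasting plasma glucose, using the pooled
# sensitivity (0.25) and specificity (0.94) reported above. The 35% prevalence
# is an assumed figure for illustration only.

def screening_outcomes(n, prevalence, sensitivity, specificity):
    """Return expected (true_pos, false_neg, false_pos, true_neg) counts."""
    diseased = n * prevalence
    healthy = n - diseased
    true_pos = diseased * sensitivity        # correctly referred for intervention
    false_neg = diseased - true_pos          # falsely reassured, not offered help
    false_pos = healthy * (1 - specificity)  # incorrectly referred
    true_neg = healthy - false_pos
    return true_pos, false_neg, false_pos, true_neg

tp, fn, fp, tn = screening_outcomes(1000, 0.35, 0.25, 0.94)
```

Under these assumptions, roughly 262 of 350 people with pre-diabetes would be missed while about 39 without it would be referred on, which is the quantitative sense in which a low-sensitivity "screen and treat" policy leaves most of the at-risk population untreated.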
Background The English NHS Diabetic Eye Screening Programme was established in 2003. Eligible people are invited annually for digital retinal photography screening. Those found to have potentially sight-threatening diabetic retinopathy (STDR) are referred to surveillance clinics or to Hospital Eye Services. Objectives To determine whether personalised screening intervals are cost-effective. Design Risk factors were identified in Gloucestershire, UK using survival modelling. A probabilistic decision hidden (unobserved) Markov model with a misgrading matrix was developed. This informed estimation of lifetime costs and quality-adjusted life-years (QALYs) in patients without STDR. Two personalised risk stratification models were employed: two screening episodes (SEs) (low, medium or high risk) or one SE with clinical information (low, medium–low, medium–high or high risk). The risk factor models were validated in other populations. Setting Gloucestershire, Nottinghamshire, South London and East Anglia (all UK). Participants People with diabetes in Gloucestershire, with risk stratification model validation using data from Nottinghamshire, South London and East Anglia. Main outcome measures Personalised risk-based algorithm for screening interval; cost-effectiveness of different screening intervals. Results Data were obtained in Gloucestershire from 12,790 people with diabetes with known risk factors to derive the risk estimation models, from 15,877 people to inform the uptake of screening and from 17,043 people to inform the health-care resource-usage costs. Two stratification models were developed: one using only results from previous screening events and one using previous screening and some commonly available GP data. Both models were capable of differentiating groups at low and high risk of development of STDR. The rate of progression to STDR was 5 per 1000 person-years (PYs) in the lowest decile of risk and 75 per 1000 PYs in the highest decile.
In the absence of personalised risk stratification, the most cost-effective strategy was to screen all patients every 3 years, with a 46% probability of this being cost-effective at a £30,000 per QALY threshold. Using either risk stratification model, screening patients at low risk every 5 years was the most cost-effective option, with a probability of 99–100% at a £30,000 per QALY threshold. For the medium-risk groups, screening every 3 years had a probability of 43–48%, while screening high-risk groups every 2 years was cost-effective with a probability of 55–59%. Conclusions The study found that annual screening of all patients for STDR was not cost-effective. Screening this entire cohort every 3 years was most likely to be cost-effective. When personalised intervals are applied, screening those in our low-risk groups every 5 years was found to be cost-effective. Screening high-risk groups every 2 years further improved the cost-effectiveness of the programme. There was considerable uncertainty in the estimated incremental costs and incremental QALYs, particularly with regard to the implications of an increasing proportion of maculopathy cases receiving intravitreal injection rather than laser treatment. Future work should focus on improving the understanding of risk, validating in further populations, and investigating quality issues in imaging and assessment, including the potential for automated image grading. Study registration Integrated Research Application System project number 118959. Funding details The National Institute for Health Research Health Technology Assessment programme.
Background Failure to take medication reduces the effectiveness of treatment, leading to increased morbidity and mortality. We evaluated the efficacy of a consultation-based intervention to support objectively assessed adherence to oral glucose lowering medication (OGLM) compared to usual care among people with type 2 diabetes. Methods This was a parallel group randomised trial in adult patients with type 2 diabetes and HbA1c ≥7.5% (58 mmol/mol), prescribed at least one OGLM. Participants were allocated to a clinic nurse delivered, innovative consultation-based intervention to strengthen patient motivation to take OGLM regularly and support medicine taking through action-plans, or to usual care. The primary outcome was the percentage of days on which the prescribed dose of medication was taken, measured objectively over 12 weeks with an electronic medication-monitoring device (TrackCap, Aardex, Switzerland). The primary analysis was intention-to-treat. Results 211 patients were randomised between July 1, 2006 and November 30, 2008 in 13 British general practices (primary care clinics). Primary outcome data were available for 194 participants (91.9%). Mean (sd) percentage of adherent days was 77.4% (26.3) in the intervention group and 69.0% (30.8) in standard care (mean difference between groups 8.4%, 95% confidence interval 0.2% to 16.7%, p = 0.044). There was no significant adverse impact on functional status or treatment satisfaction. Conclusions This well-specified, theory-based intervention delivered in a single 30-minute session in primary care increased objectively measured medication adherence, with no adverse effect on treatment satisfaction. These findings justify a definitive trial of this approach to improving medication adherence over a longer period of time, with clinical and cost-effectiveness outcomes to inform clinical practice. Trial registration Current Controlled Trials ISRCTN30522359
Background The SARS-CoV-2 pandemic has passed its first peak in Europe. Aim To describe the mortality in England and its association with SARS-CoV-2 status and other demographic and risk factors. Design and setting Cross-sectional analyses of people with known SARS-CoV-2 status in the Oxford RCGP Research and Surveillance Centre (RSC) sentinel network. Method Pseudonymised, coded clinical data were uploaded from volunteer general practice members of this nationally representative network (n = 4,413,734). All-cause mortality was compared with national rates for 2019, using a relative survival model, reporting relative hazard ratios (RHR) and 95% confidence intervals (CI). A multivariable adjusted odds ratio (OR) analysis was conducted for those with known SARS-CoV-2 status (n = 56,628, 1.3%), including multiple imputation and inverse probability analysis, and a complete cases sensitivity analysis. Results Mortality peaked in week 16. People living in households of ≥9 people had a fivefold increase in relative mortality (RHR = 5.1, 95% CI = 4.87 to 5.31, P<0.0001). The ORs of mortality were 8.9 (95% CI = 6.7 to 11.8, P<0.0001) and 9.7 (95% CI = 7.1 to 13.2, P<0.0001) for virologically and clinically diagnosed cases respectively, using people with negative tests as reference. The adjusted mortality for the virologically confirmed group was 18.1% (95% CI = 17.6 to 18.7). Male sex, population density, black ethnicity (compared with white), and long-term conditions, including learning disability (OR = 1.96, 95% CI = 1.22 to 3.18, P = 0.0056), were associated with higher odds of mortality. Conclusion The first SARS-CoV-2 peak in England has been associated with excess mortality. Planning for subsequent peaks needs to better manage risk in males, those of black ethnicity, older people, people with learning disabilities, and people who live in multi-occupancy dwellings.
Summary Background Faecal immunochemical testing (FIT) is recommended by the National Institute for Health and Care Excellence (NICE) to triage symptomatic primary care patients for further investigation of colorectal cancer. Aim To ascertain the diagnostic performance of FIT in symptomatic adult primary care patients. Methods Faecal samples from routine primary care practice in Oxfordshire, UK were analysed using the HM‐JACKarc FIT method between March 2017 and March 2020. Clinical details were recorded. Patients were followed for up to 36 months in linked hospital records for evidence of benign and serious (colorectal cancer, high‐risk adenomas and bowel inflammation) colorectal disease. The diagnostic accuracy of FIT is reported by gender, age group and FIT threshold. Results In 9896 adult patients with at least 6‐month follow‐up, a FIT result ≥10 µg Hb/g faeces had a sensitivity for colorectal cancer of 90.5% (95% CI 84.9%‐96.1%), specificity 91.3% (90.8%‐91.9%), positive predictive value (PPV) 10.1% (8.15%‐12.0%) and negative predictive value (NPV) 99.9% (99.8%‐100.0%). The PPV and specificity for serious colorectal disease were higher and the sensitivity and NPV lower than for colorectal cancer alone. The area under the curve for all adults did not change substantially by gender or by increasing the minimum age of testing. Using ≥10 µg Hb/g faeces, 10% of adults would be investigated to detect 91% of cancers, a number needed to scope of ten to detect one cancer. Using ≥7, ≥50 and ≥150 µg Hb/g faeces, 11%, 4% and 3% of adults would be investigated, and 91%, 74% and 54% cancers detected, respectively. Conclusion A FIT threshold of ≥10 µg Hb/g faeces would be appropriate to triage adult patients presenting to primary care with symptoms of serious colorectal disease. FIT may be used to reprioritise patients referred with colorectal cancer symptoms whose investigations have been delayed by the COVID‐19 pandemic.
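The relationship between the sensitivity, specificity, PPV, and NPV reported above follows from Bayes' rule once disease prevalence is fixed. This is an illustrative sketch, not the paper's analysis: the ~1.1% colorectal cancer prevalence is an assumed figure, chosen because it makes the computed values land near the reported PPV of 10.1% and NPV of 99.9% at the ≥10 µg Hb/g threshold.

```python
# Illustrative sketch (not the paper's analysis): deriving PPV and NPV from
# sensitivity, specificity, and prevalence. The 1.1% cancer prevalence is an
# assumed value for illustration.

def predictive_values(sensitivity, specificity, prevalence):
    """Return (ppv, npv) for a test applied at the given disease prevalence."""
    tp = sensitivity * prevalence              # true positives per person tested
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    tn = specificity * (1 - prevalence)        # true negatives
    fn = (1 - sensitivity) * prevalence        # false negatives
    return tp / (tp + fp), tn / (tn + fn)

# FIT at >=10 ug Hb/g: sensitivity 0.905, specificity 0.913 (from the abstract)
ppv, npv = predictive_values(0.905, 0.913, 0.011)
```

This also shows why PPV is so low despite high sensitivity and specificity: at low prevalence, false positives from the large disease-free majority outnumber true positives, which is exactly why roughly ten patients must be scoped per cancer detected.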