Background
The dramatic rise in the number of chronically ill patients on permanent disability benefits threatens the sustainability of social security in high-income countries. Social insurance organizations have started to invest in promising but costly return to work (RTW) coordination programmes. The benefit, however, remains uncertain. We conducted a systematic review to determine the long-term effectiveness of RTW coordination compared with usual practice in patients at risk of long-term disability.

Methods and Findings
Eligible trials enrolled employees absent from work for at least 4 weeks and randomly assigned them to RTW coordination or to usual practice. We searched 5 databases (to April 2, 2012). Two investigators performed standardised eligibility assessment, study appraisal, and data extraction independently and in duplicate. The GRADE framework guided our assessment of confidence in the meta-analytic estimates. We identified 9 trials from 7 countries, 8 focusing on musculoskeletal and 1 on mental complaints. Most trials followed participants for 12 months or less. No trial assessed permanent disability. Moderate quality evidence suggests a benefit of RTW coordination on the proportion at work at end of follow-up (risk ratio = 1.08, 95% CI = 1.03 to 1.13; absolute effect = 5 in 100 additional individuals returning to work, 95% CI = 2 to 8), overall function (mean difference [MD] on a 0 to 100 scale = 5.2, 95% CI = 2.4 to 8.0; minimal important difference [MID] = 10), physical function (MD = 5.3, 95% CI = 1.4 to 9.1; MID = 8.4), mental function (MD = 3.1, 95% CI = 0.7 to 5.6; MID = 7.3), and pain (MD = 6.1, 95% CI = 3.1 to 9.2; MID = 10).

Conclusions
Moderate quality evidence suggests that RTW coordination results in small relative but likely important absolute benefits in the likelihood of disabled or sick-listed patients returning to work, with associated small improvements in function and pain.
Future research should explore whether the limited effects persist, and whether the programmes are cost effective in the long term.
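The absolute effect quoted above follows arithmetically from the pooled risk ratio and the baseline return-to-work proportion under usual practice. A minimal sketch, assuming a baseline of 62.5% (a value consistent with the reported 5-in-100 absolute effect, but not itself stated in the abstract):

```python
# Deriving an absolute effect from a pooled risk ratio and an
# assumed baseline (usual-practice) return-to-work proportion.
# The baseline of 0.625 below is an illustrative assumption.

def absolute_effect(risk_ratio: float, baseline_risk: float) -> float:
    """Additional individuals returning to work per person at risk."""
    return baseline_risk * (risk_ratio - 1)

# RR = 1.08 with an assumed 62.5% baseline gives about 5 extra
# returns to work per 100 workers, matching the reported estimate.
extra_per_100 = 100 * absolute_effect(1.08, 0.625)
print(round(extra_per_100))  # → 5
```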
Objectives
To explore agreement among healthcare professionals assessing eligibility for work disability benefits.

Design
Systematic review and narrative synthesis of reproducibility studies.

Data sources
Medline, Embase, and PsycINFO searched up to 16 March 2016, without language restrictions, and review of bibliographies of included studies.

Eligibility criteria
Observational studies investigating reproducibility among healthcare professionals performing disability evaluations using a global rating of working capacity and reporting inter-rater reliability by a statistical measure or descriptively. Studies could be conducted in insurance settings, where decisions on ability to work include normative judgments based on legal considerations, or in research settings, where decisions on ability to work disregard normative considerations. Teams of paired reviewers identified eligible studies, appraised their methodological quality and generalisability, and abstracted results with pretested forms. As heterogeneity of research designs and findings impeded a quantitative analysis, a descriptive synthesis stratified by setting (insurance or research) was performed.

Results
From 4562 references, 101 full text articles were reviewed. Of these, 16 studies conducted in an insurance setting and seven in a research setting, performed in 12 countries, met the inclusion criteria. Studies in the insurance setting were conducted with medical experts assessing claimants who were actual disability claimants or played by actors, hypothetical cases, or short written scenarios. Conditions were mental (n=6, 38%), musculoskeletal (n=4, 25%), or mixed (n=6, 38%). Applicability of findings from studies conducted in an insurance setting to real life evaluations ranged from generalisable (n=7, 44%) and probably generalisable (n=3, 19%) to probably not generalisable (n=6, 37%). Median inter-rater reliability among experts was 0.45 (range: intraclass correlation coefficient 0.86 to κ −0.10).
Inter-rater reliability was poor in six studies (37%) and excellent in only two (13%). This contrasts with studies conducted in the research setting, where the median inter-rater reliability was 0.76 (range 0.53 to 0.91) and 71% (5/7) of studies achieved excellent inter-rater reliability. Reliability between assessing professionals was higher when the evaluation was guided by a standardised instrument (23 studies, P=0.006). No such association was detected for subjective or chronic health conditions or for the studies' generalisability to real-world evaluation of disability (P=0.46, 0.45, and 0.65, respectively).

Conclusions
Despite their common use and far-reaching consequences for workers claiming disabling injury or illness, research on the reliability of medical evaluations of disability for work is limited and indicates high variation in judgments among assessing professionals. Standardising the evaluation process could improve reliability. Development and testing of instruments and structured approaches to improve reliability in the evaluation of disability are urgently needed.
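The poor/excellent labels above reflect conventional reliability bands. A small sketch using the widely cited Cicchetti cut-offs (poor < 0.40, fair 0.40–0.59, good 0.60–0.74, excellent ≥ 0.75); the review's exact thresholds may differ:

```python
# Classifying an inter-rater reliability coefficient (ICC or kappa)
# into the conventional Cicchetti bands. These cut-offs are a common
# convention, assumed here for illustration.

def reliability_band(coefficient: float) -> str:
    if coefficient < 0.40:
        return "poor"
    if coefficient < 0.60:
        return "fair"
    if coefficient < 0.75:
        return "good"
    return "excellent"

# Median coefficients reported above:
print(reliability_band(0.45))  # insurance setting → fair
print(reliability_band(0.76))  # research setting → excellent
```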
RTW efforts are assessed in half of the participating European countries. When compared, the characteristics of the assessment of RTW efforts in the participating European countries show both similarities and differences. This study may facilitate the gathering and exchange of knowledge and experience between countries on the assessment of RTW efforts.
Background
Expert psychiatrists conducting work disability evaluations often disagree on work capacity (WC) when assessing the same patient. More structured and standardised evaluations focusing on function could improve agreement. The RELY studies aimed to establish the inter-rater reproducibility (reliability and agreement) of 'functional evaluations' in patients with mental disorders applying for disability benefits, and to compare the effect of limited versus intensive expert training on reproducibility.

Methods
We performed two multi-centre reproducibility studies on standardised functional WC evaluation (RELY 1 and 2). Trained psychiatrists interviewed 30 and 40 patients respectively and determined WC using the Instrument for Functional Assessment in Psychiatry (IFAP). Three psychiatrists per patient estimated WC from videotaped evaluations. We analysed reliability (intraclass correlation coefficients [ICC]) and agreement ('standard error of measurement' [SEM] and proportions of comparisons within prespecified limits) between expert evaluations of WC. Our primary outcome was WC in alternative work (WC_alternative.work), 100–0%. Secondary outcomes were WC in the last job (WC_last.job), 100–0%; patients' perceived fairness of the evaluation, 10–0, higher is better; and usefulness to psychiatrists.

Results
Inter-rater reliability for WC_alternative.work was fair in RELY 1 (ICC 0.43; 95% CI 0.22–0.60) and RELY 2 (ICC 0.44; 0.25–0.59). Agreement was low in both studies: the 'standard error of measurement' for WC_alternative.work was 24.6 percentage points (20.9–28.4) and 19.4 (16.9–22.0), respectively. Using a 'maximum acceptable difference' of 25 percentage points WC_alternative.work between two experts, 61.6% of comparisons in RELY 1 and 73.6% of comparisons in RELY 2 fell within these limits.
Post-hoc secondary analysis of RELY 2 versus RELY 1 showed a significant change in SEM_alternative.work (−5.2 percentage points WC_alternative.work [95% CI −9.7 to −0.6]) and in the proportion of differences ≤25 percentage points WC_alternative.work between two experts (p = 0.008). Patients perceived the functional evaluation as fair (RELY 1: mean 8.0; RELY 2: 9.4), and psychiatrists found it useful.

Conclusions
Evidence from non-randomised studies suggests that intensive training in functional evaluation may increase agreement on WC between experts, but it fell short of stakeholders' expectations and did not alter reliability. Isolated efforts in training psychiatrists may not suffice to reach the expected level of agreement. A societal discussion about achievable goals, and a readiness to consider procedural changes in WC evaluations, may be warranted.
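The SEM reported above relates to the ICC through the classical test-theory identity SEM = SD × √(1 − ICC). A sketch with a hypothetical between-patient SD (the RELY papers report the SEM directly; the SD value below is assumed for illustration):

```python
import math

# Classical test-theory relation between the standard error of
# measurement (SEM), the between-subject SD, and the reliability (ICC).

def sem(sd: float, icc: float) -> float:
    """Standard error of measurement, in the scale's units."""
    return sd * math.sqrt(1 - icc)

# With a hypothetical between-patient SD of 32.5 percentage points and
# the RELY 1 reliability of ICC = 0.43, the SEM comes out near the
# study's reported 24.6 percentage points.
print(round(sem(32.5, 0.43), 1))
```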
Background
Medical work capacity evaluations play a key role in social security schemes because they usually form the basis for eligibility decisions regarding disability benefits. However, the evaluations are often poorly standardized and lack transparency, as decisions on work capacity are based on a claimant's disease rather than on his or her functional capacity. A comprehensive and consistent illustration of a claimant's lived experience in relation to functioning, applying the International Classification of Functioning, Disability and Health (ICF) and the ICF Core Sets (ICF-CS), potentially enhances transparency and standardization of work capacity evaluations. In our study we wanted to establish whether and how the relevant content of work capacity evaluations can be captured by ICF-CS, using disability claimants with chronic widespread pain (CWP) and low back pain (LBP) as examples.

Methods
Mixed methods study, involving a qualitative and quantitative content analysis of medical reports. The ICF was used for data coding. The coded categories were ranked according to the percentage of reports in which they were addressed. Relevance thresholds at 25% and 50% were applied. To determine the extent to which the categories above the thresholds are represented by applicable ICF-CS or combinations thereof, measures of the ICF-CS' degree of coverage (i.e. content validity) and efficiency (i.e. practicability) were defined.

Results
Focusing on the 25% threshold and combining the Brief ICF-CS for CWP, LBP, and depression for CWP reports, the coverage ratio reached 49% and the efficiency ratio 70%. Combining the Brief ICF-CS for LBP, CWP, and obesity for LBP reports led to a coverage of 47% and an efficiency of 78%.

Conclusions
The relevant content of work capacity evaluations involving CWP and LBP can be represented by a combination of applicable ICF-CS.
A suitable standard for documenting such evaluations could consist of the Brief ICF-CS for CWP, LBP, and depression or obesity, augmented by additional ICF categories relevant for this particular context. In addition, the unique individual experiences of claimants have to be considered in order to assess work capacity comprehensively.
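The coverage and efficiency ratios can be sketched under one plausible reading of the definitions: coverage as the share of relevant report categories captured by the combined Core Sets, and efficiency as the share of Core Set categories that turn out to be relevant. The category sets below are illustrative, not the study's actual data:

```python
# Set-based sketch of the coverage (content validity) and efficiency
# (practicability) ratios. Definitions are an assumed reading of the
# abstract; the ICF category sets are illustrative examples only.

def coverage(relevant: set, core_set: set) -> float:
    """Share of relevant categories captured by the Core Set."""
    return len(relevant & core_set) / len(relevant)

def efficiency(relevant: set, core_set: set) -> float:
    """Share of Core Set categories that are actually relevant."""
    return len(relevant & core_set) / len(core_set)

relevant = {"b130", "b152", "b280", "d430", "d850"}   # above the 25% threshold
combined = {"b130", "b152", "b280", "d850", "e310"}   # combined Brief ICF-CS

print(round(coverage(relevant, combined), 2))   # → 0.8
print(round(efficiency(relevant, combined), 2)) # → 0.8
```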
The results suggest that assessments of disability are largely based on the initial representation formed after reading the file. The main pitfall is that the final representation rests on general beliefs rather than on actual client information. For training and support, this means that doctors should be made aware of the extent to which their assessment is anchored in the case at hand.
Background: Assessments for long-term incapacity for work are performed by Social Insurance Physicians (SIPs), who rely on interviews with claimants as an important part of the process. These interviews are susceptible to bias. In the Netherlands, three protocols have been developed for conducting these interviews. These protocols are expert- and practice-based. We studied to what extent practitioners adhere to these protocols.
Background: In social insurance, the evaluation of work disability is becoming stricter as priority is given to the resumption of work, which calls for quality assurance in these evaluations. Evidence-based guidelines have become a major instrument in the quality control of health care, and the quality of a guideline's development can be assessed using the AGREE instrument. In social insurance medicine, such guidelines are relatively new. We were interested in which guidelines have been developed to support the medical evaluation of work disability, and in the quality of these guidelines.