Providing a good patient experience is a key part of providing high-quality medical care. This paper explains why patient experience is important in its own right, and its relationship to other domains of quality. We describe methods of measuring patient experience, including issues relating to validity, reliability and response bias. Differences in reported patient experience may sometimes reflect differences in the expectations of different population groups, and we describe the arguments for and against adjusting patient experience data for population characteristics. As with other quality improvement strategies, feeding back patient experience data on its own is unlikely to improve quality: sustained and multiple interventions are usually required to deliver sustained improvements in care.

Key Points for Decision Makers

- Quality is a multi-dimensional concept, and a single indicator does not (and should not) reflect quality in other domains. Patient experience is important in its own right.
- Patient experience is consistently and positively associated with other quality outcomes, including patient safety and clinical effectiveness, across a wide range of studies, and healthcare facilities providing high-quality clinical care tend to have better experiences reported by patients. However, these associations are frequently modest in size. Clinical quality and patient experience should be considered as distinct but inter-related aspects of quality.
- Differences in patient experience scores between population (e.g. ethnic) groups may in part reflect differences in expectations.
- Adjusting patient experience scores for population characteristics (e.g. ethnicity, deprivation) increases the acceptability of the results to healthcare providers. Case-mix adjustment in general makes only small differences to scores, though the greatest positive effect is on providers serving vulnerable populations.
- Significant quality improvement in general requires multiple strategies which are sustained over time. The same is probably true when using patient experience measures as a quality improvement tool; simple feedback on its own is unlikely to produce significant improvements in care.
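The case-mix adjustment mentioned above can be illustrated with indirect standardisation: each provider's observed mean score is compared with the mean expected given its patients' characteristics. The sketch below is a minimal illustration under invented data and a single adjuster (age band); it is not the method used in any of the studies summarised here.

```python
# Minimal sketch of indirect standardisation of provider-level patient
# experience scores for case mix. All data and the "age band" adjuster
# are invented for illustration.
from statistics import mean

# Patient-level records: (provider, age_band, experience score 0-100)
rows = [
    ("A", "young", 70), ("A", "young", 65), ("A", "old", 85),
    ("B", "old", 90), ("B", "old", 88), ("B", "young", 72),
]

national_mean = mean(s for _, _, s in rows)

# Expected score for each patient = national mean score for their age band.
band_mean = {b: mean(s for _, bb, s in rows if bb == b) for b in {"young", "old"}}

def adjusted(provider):
    """Case-mix-adjusted score: national mean plus (observed - expected)."""
    obs = mean(s for p, _, s in rows if p == provider)
    exp = mean(band_mean[b] for p, b, _ in rows if p == provider)
    return national_mean + (obs - exp)
```

With these invented numbers the adjustment narrows the raw gap between the two providers, because provider B happens to serve more of the age band that scores highly everywhere — the kind of small shift, largest for providers with atypical populations, that the key points describe.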
Objective: To describe the accuracy of ethnicity coding in contemporary National Health Service (NHS) hospital records compared with the ‘gold standard’ of self-reported ethnicity.

Design: Secondary analysis of data from a cross-sectional survey (2011).

Setting: All NHS hospitals in England providing cancer treatment.

Participants: 58 721 patients with cancer for whom ethnicity information (Office for National Statistics 2001 16-group classification) was available from self-reports (considered to represent the ‘gold standard’) and their hospital record.

Methods: We calculated the sensitivity and positive predictive value (PPV) of hospital record ethnicity. Further, we used a logistic regression model to explore independent predictors of discordance between recorded and self-reported ethnicity.

Results: Overall, 4.9% (4.7–5.1%) of people had their self-reported ethnic group incorrectly recorded in their hospital records. Recorded White British ethnicity had high sensitivity (97.8% (97.7–98.0%)) and PPV (98.1% (98.0–98.2%)) for self-reported White British ethnicity. Recorded ethnicity information for the 15 other ethnic groups was substantially less accurate, with 41.2% (39.7–42.7%) incorrect. Recorded ‘Mixed’ ethnicity had low sensitivity (12–31%) and PPV (12–42%). Recorded ‘Indian’, ‘Chinese’, ‘Black Caribbean’ and ‘Black African’ ethnic groups had intermediate levels of sensitivity (65–80%) and PPV (80–89%). In multivariable analysis, belonging to an ethnic minority group was the only independent predictor of discordant ethnicity information. There was strong evidence that the degree of discordance of ethnicity information varied substantially between hospitals (p < 0.0001).

Discussion: Current levels of accuracy of ethnicity information in NHS hospital records support valid profiling of White/non-White ethnic differences. However, profiling of ethnic differences in process or outcome measures for specific minority groups may contain a substantial and variable degree of misclassification error. These considerations should be taken into account when interpreting ethnic variation audits based on routine data, and should inform initiatives aimed at improving the accuracy of ethnicity information in hospital records.
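The sensitivity and PPV measures reported above come directly from cross-tabulating self-reported against recorded ethnicity: sensitivity asks, of patients who self-report a group, what fraction the hospital recorded correctly; PPV asks, of records coded as a group, what fraction match the self-report. The sketch below uses a handful of invented patient pairs, not the study's data.

```python
# Sketch: sensitivity and PPV of recorded ethnicity against self-reported
# ('gold standard') ethnicity. The pairs below are invented examples.

# Each pair is (self_reported, recorded) for one patient.
pairs = [
    ("White British", "White British"),
    ("White British", "White British"),
    ("Indian", "Indian"),
    ("Indian", "White British"),
    ("Black Caribbean", "Black Caribbean"),
    ("Black Caribbean", "Black African"),
]

def sensitivity(pairs, group):
    """Of patients self-reporting `group`, the fraction recorded as `group`."""
    recorded = [r for s, r in pairs if s == group]
    return sum(r == group for r in recorded) / len(recorded)

def ppv(pairs, group):
    """Of records coded as `group`, the fraction whose self-report is `group`."""
    reported = [s for s, r in pairs if r == group]
    return sum(s == group for s in reported) / len(reported)

print(sensitivity(pairs, "Indian"))        # → 0.5 (one of two Indian self-reports recorded correctly)
```

Note the asymmetry: a group can have high PPV but low sensitivity (its records are trustworthy but many of its members are coded elsewhere), which is why the paper reports both.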
Background: Core outcome sets (COS) prioritise outcomes based on their importance to key stakeholders, reduce reporting bias and increase comparability across studies. The first phase of a COS study is to form a ‘long-list’ of outcomes; key stakeholders then decide on their importance. COS reporting is described as suboptimal, and this first phase is often under-reported. Our objective was to develop a ‘long-list’ of outcome items for non-pharmacological interventions for people with dementia living at home.

Methods: Three iterative phases were conducted. First, people living with dementia, care partners, health and social care professionals, policymakers and researchers (n = 55) took part in interviews or focus groups and were asked which outcomes were important. Second, existing dementia trials were identified from the ALOIS database; 248 of 1009 non-pharmacological studies met the inclusion criteria. Primary and secondary outcomes were extracted from a 50% random sample (n = 124), along with eight key reviews/qualitative papers and 38 policy documents. Third, extracted outcome items were translated onto an existing qualitative framework and mapped into domains. The research team removed areas of duplication and refined the ‘long-list’ in eight workshops.

Results: One hundred and seventy outcome items were extracted from the qualitative data and literature. These were consolidated to 54 items in four domains (Self-Managing Dementia Symptoms, Quality of Life, Friendly Neighbourhood & Home, Independence).

Conclusions: This paper presents a transparent blueprint for ‘long-list’ development. Though a useful resource in their own right, the 54 outcome items will be distilled further in a modified Delphi survey and consensus meeting to identify core outcomes.
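The consolidation step described above — translating extracted outcome items onto framework domains and collapsing duplicate wordings — can be sketched as a simple mapping. The four domain names come from the abstract; the item wordings and the canonical-wording mapping below are invented for illustration.

```python
# Sketch of the long-list consolidation step: map each extracted outcome
# item to a domain and collapse duplicated wordings onto one canonical item.
# Item wordings and the `canonical` mapping are invented examples.

domain_of = {
    "memory problems": "Self-Managing Dementia Symptoms",
    "enjoyment of life": "Quality of Life",
    "feeling safe at home": "Friendly Neighbourhood & Home",
    "doing things for oneself": "Independence",
}

# Duplicated wordings collapse onto one canonical item per concept.
canonical = {"forgetting recent events": "memory problems"}

def consolidate(items):
    """Return deduplicated (item, domain) pairs, preserving first-seen order."""
    seen, out = set(), []
    for item in items:
        item = canonical.get(item, item)
        if item not in seen:
            seen.add(item)
            out.append((item, domain_of[item]))
    return out
```

In the study itself this judgement was made by the research team across eight workshops; the point of the sketch is only that the process is a many-to-one mapping from 170 extracted wordings down to 54 items in four domains.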
Background: There has been an increased focus on improving quality of care within the NHS over the last 15 years; as part of this, policy has emphasised the importance of patient feedback, through National Service Frameworks and the Quality and Outcomes Framework. The development and administration of large-scale national patient surveys to gather representative data on patient experience, such as the national GP Patient Survey in primary care, has been one such initiative. However, it remains unclear how the survey is used by patients and what impact the data may have on practice.

Objectives: Our research aimed to gain insight into how different patients use surveys to record experiences of general practice; how primary care staff respond to feedback; and how to engage primary care staff in responding to feedback.

Methods: We used methods including quantitative survey analyses, focus groups, interviews, an exploratory trial and an experimental vignette study.

Results: (1) Understanding patient experience data. Patients readily criticised their care when reviewing consultations on video, although they were reluctant to be critical when completing questionnaires. When trained raters judged communication during a consultation to be poor, a substantial proportion of patients nonetheless rated the doctor as ‘good’ or ‘very good’. Absolute scores on questionnaire surveys should therefore be treated with caution; they may present an overoptimistic view of general practitioner (GP) care. However, relative rankings to identify GPs who are better or poorer at communicating may be acceptable, as long as statistically reliable figures are obtained. Most patients have a particular GP whom they prefer to see; however, up to 40% of people who have such a preference are unable to see the doctor of their choice regularly. Users of out-of-hours care reported worse experiences when the service was run by a commercial provider than when it was run by a not-for-profit or NHS provider.

(2) Understanding patient experience in minority ethnic groups. Asian respondents to the GP Patient Survey tend to be registered with practices with generally low scores, explaining about half of the difference between the poorer reported experiences of South Asian patients and those of white British patients. We found no evidence that South Asian patients used response scales differently. When viewing the same consultation in an experimental vignette study, South Asian respondents gave higher scores than white British respondents. This suggests that the low scores given by South Asian respondents in patient experience surveys reflect care that is genuinely worse than that experienced by their white British counterparts. We also found that service users of mixed or Asian ethnicity reported lower scores than white respondents when rating out-of-hours services.

(3) Using patient experience data. We found that measuring GP–patient communication at practice level masks variation between individual doctors within a practice. In general practices and in out-of-hours centres, staff were sceptical about the value of patient surveys and their ability to support service reconfiguration and quality improvement. In both settings, surveys were deemed necessary but not sufficient. Staff expressed a preference for free-text comments, as these provided more tangible, actionable data. An exploratory trial of real-time feedback (RTF) found that only 2.5% of consulting patients left feedback using touch screens in the waiting room, although more did so when reminded by staff. The representativeness of responding patients remains to be evaluated. Staff were broadly positive about using RTF, and practices valued the ability to include their own questions. Staff benefited from having a facilitated session and protected time to discuss patient feedback.

Conclusions: Our findings demonstrate the importance of patient experience feedback as a means of informing NHS care, and confirm that surveys are a valuable resource for monitoring national trends in quality of care. However, surveys may be insufficient in themselves to fully capture patient feedback, and in practice GPs rarely used survey results for quality improvement. The impact of patient surveys appears to be limited, and effort should be invested in making survey results more meaningful to practice staff. There were several limitations of this programme of research. Practice recruitment for our in-hours studies took place in two broad geographical areas, which may not be fully representative of practices nationally. Our focus was on patient experience in primary care; secondary care settings may face different challenges in implementing quality improvement initiatives driven by patient feedback. Recommendations for future research include consideration of alternative feedback methods to better support patients in identifying poor care; investigation of the factors driving poorer experiences of communication in South Asian patient groups; further investigation of how best to deliver patient feedback to clinicians so as to engage them and foster quality improvement; and further research to support the development and implementation of interventions aiming to improve care when deficiencies in patient experience are identified.

Funding: The National Institute for Health Research Programme Grants for Applied Research programme.
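The caveat above — that relative rankings of GPs are acceptable only "as long as statistically reliable figures are obtained" — can be made concrete with the Spearman-Brown formula, a standard way (assumed here for illustration; not necessarily the programme's exact method) to relate the reliability of a doctor-level mean score to the number of patient responses and the intraclass correlation (ICC) of single responses.

```python
# Sketch: Spearman-Brown reliability of doctor-level mean scores.
# The ICC value used in the usage note below is an assumed, illustrative figure.
import math

def reliability(n_responses: int, icc: float) -> float:
    """Reliability of a mean of n_responses ratings, given the
    reliability (ICC) of a single rating."""
    return n_responses * icc / (1 + (n_responses - 1) * icc)

def responses_needed(target: float, icc: float) -> int:
    """Smallest number of responses giving at least `target` reliability
    (rearrangement of the Spearman-Brown formula)."""
    return math.ceil(target * (1 - icc) / (icc * (1 - target)))
```

For example, with an assumed ICC of 0.1 for single patient ratings, 21 responses per doctor are needed to reach a reliability of 0.7 — one reason practice-level averages built on few responses per individual doctor can mask how good individual doctors are.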