Background: The idea that underlying, generative mechanisms give rise to causal regularities has become a guiding principle across many social and natural science disciplines. A specific form of this enquiry, realist evaluation, is gaining momentum in the evaluation of complex social interventions. Rather than asking whether an intervention 'works', it focuses on 'what works, how, in which conditions and for whom', using context-mechanism-outcome configurations. Realist evaluation can be difficult to codify and requires considerable researcher reflection and creativity; as such, there is often confusion when operationalising the method in practice. This article aims to clarify and further develop the concept of mechanism in realist evaluation and, in doing so, to aid the learning of those operationalising the methodology.

Discussion: Using a social science illustration, we argue that disaggregating the concept of mechanism into its constituent parts helps to distinguish between the resources offered by the intervention and the ways in which these change the reasoning of participants. This in turn helps to distinguish between a context and a mechanism. We explore the notion of mechanisms 'firing' in social science research, discussing how this may stifle researchers' realist thinking, and underline the importance of conceptualising mechanisms as operating on a continuum rather than as an 'on/off' switch.

Summary: We hope the discussions in this article will help to progress and operationalise realist methods. Such development is timely given the relative infancy of the methodology and its recently increased profile and use in social science research. The arguments we present have been tested and are explained throughout the article using a social science illustration, evidencing their usability and value.
Background: In this paper, we report the findings of a realist synthesis that aimed to understand how and in what circumstances patient-reported outcome measures (PROMs) support patient-clinician communication and subsequent care processes and outcomes in clinical care. We tested two overarching programme theories: (1) PROMs completion prompts a process of self-reflection and supports patients to raise issues with clinicians, and (2) PROMs scores raise clinicians' awareness of patients' problems and prompt discussion and action. We examined how the structure of the PROM and the care context shaped the ways in which PROMs support clinician-patient communication and subsequent care processes.

Results: PROMs completion prompts patients to reflect on their health and gives them permission to raise issues with clinicians. However, clinicians found that standardised PROMs completion during patient assessments sometimes constrained rather than supported communication. In response, clinicians adapted their use of PROMs to render them compatible with the ongoing management of patient relationships. Individualised PROMs supported dialogue by enabling the patient to tell their story. In oncology, PROMs completion outside of the consultation enabled clinicians to identify problematic symptoms when the PROM acted as a substitute for, rather than an addition to, the clinical encounter, and when the PROM focused on symptoms and side effects rather than health-related quality of life (HRQoL). Patients did not always feel it was appropriate to discuss emotional, functional or HRQoL issues with doctors, and doctors did not perceive that this was within their remit.

Conclusions: This paper makes two important contributions to the literature. First, our findings show that PROMs completion is not a neutral act of information retrieval but can change how patients think about their condition. Second, our findings reveal that the ways in which clinicians use PROMs are shaped by their relationships with patients and by professional roles and boundaries. Future research should examine how PROMs completion and feedback shapes, and is influenced by, the process of building relationships with patients, rather than just their impact on information exchange and decision making.

Electronic supplementary material: The online version of this article (10.1186/s41687-018-0061-6) contains supplementary material, which is available to authorized users.
Background: The feedback of patient-reported outcome measures (PROMs) data is intended to support the care of individual patients and to act as a quality improvement (QI) strategy.

Objectives: To (1) identify the ideas and assumptions underlying how individual and aggregated PROMs data are intended to improve patient care, and (2) review the evidence to examine the circumstances in which and processes through which PROMs feedback improves patient care.

Design: Two separate but related realist syntheses: (1) feedback of aggregate PROMs and performance data to improve patient care, and (2) feedback of individual PROMs data to improve patient care.

Interventions: Aggregate – feedback and public reporting of PROMs, patient experience data and performance data to hospital providers and primary care organisations. Individual – feedback of PROMs in oncology, palliative care and the care of people with mental health problems in primary and secondary care settings.

Main outcome measures: Aggregate – providers' responses, attitudes and experiences of using PROMs and performance data to improve patient care. Individual – providers' and patients' experiences of using PROMs data to raise issues with clinicians, change clinicians' communication practices, change patient management and improve patient well-being.

Data sources: Searches of electronic databases, and forwards and backwards citation tracking.

Review methods: Realist synthesis to identify, test and refine programme theories about when, how and why PROMs feedback leads to improvements in patient care.

Results: Providers were more likely to take steps to improve patient care in response to the feedback and public reporting of aggregate PROMs and performance data if they perceived that these data were credible, were aimed at improving patient care, were timely and provided a clear indication of the source of the problem. However, implementing substantial and sustainable improvements to patient care required system-wide approaches. In the care of individual patients, PROMs function more as a tool to support patients in raising issues with clinicians than as a means of substantially changing clinicians' communication practices with patients. Patients valued both standardised and individualised PROMs as a tool for raising issues, but thought is required as to which patients may benefit and which may not. In settings such as palliative care and psychotherapy, clinicians viewed individualised PROMs as useful for building rapport and supporting the therapeutic process. PROMs feedback did not substantially shift clinicians' communication practices or focus discussion on psychosocial issues; this required a shift in clinicians' perceptions of their remit.

Strengths and limitations: There was a paucity of research examining the feedback of aggregate PROMs data to providers, and we drew on evidence from interventions with similar programme theories (other forms of performance data) to test our theories.

Conclusions: PROMs data act as 'tin openers' rather than 'dials'. Providers need more support and guidance on how to collect their own internal data, how to rule out alternative explanations for their outlier status and how to explore its possible causes. There is also a tension between PROMs as a QI strategy and their use in the care of individual patients: PROMs that clinicians find useful in assessing patients, such as individualised measures, are not useful as indicators of service quality.

Future work: Future research should (1) explore how differently performing providers have responded to aggregate PROMs feedback, and how organisations have collected PROMs data both for individual patient care and to improve service quality; and (2) explore whether or not, and how, incorporating PROMs into patients' electronic records allows multiple different clinicians to receive PROMs feedback, discuss it with patients and act on the data to improve patient care.

Study registration: This study is registered as PROSPERO CRD42013005938.

Funding: The National Institute for Health Research Health Services and Delivery Research programme.
No abstract
Objectives: Internationally, there has been considerable debate about the role of data in supporting quality improvement in health care. Our objective was to understand how, why and in what circumstances the feedback of aggregated patient-reported outcome measures data improved patient care.

Methods: We conducted a realist synthesis. We identified three main programme theories underlying the use of patient-reported outcome measures as a quality improvement strategy and expressed them as nine 'if then' propositions. We identified international evidence to test these propositions through searches of electronic databases and citation tracking, and supplemented our synthesis with evidence from similar forms of performance data. We synthesised this evidence by comparing the mechanisms and impact of patient-reported outcome measures and other performance data on quality improvement in different contexts.

Results: Three programme theories were identified: supporting patient choice, improving accountability, and enabling providers to compare their performance with others. Relevant contextual factors were the extent of public disclosure, the use of financial incentives, the perceived credibility of the data and the practicality of the results. Available evidence suggests that patients or their agents rarely use published performance data when selecting a provider. The perceived motivation behind public reporting is an important determinant of how providers respond. When clinicians perceived that performance indicators were not credible but were incentivised to collect them, gaming or manipulation of data occurred. Outcome data do not provide information on the cause of poor care: providers needed to integrate and interpret patient-reported outcome measures and other outcome data in the context of other data. Lack of timeliness of performance data constrains their impact.

Conclusions: Although there is only limited research evidence to support some widely held theories of how aggregated patient-reported outcome measures data stimulate quality improvement, several lessons emerge from interventions sharing the same programme theories to help guide the increasing use of these measures.
This exploratory randomised controlled trial examined the effectiveness of a novel short messaging service intervention, underpinned by the theory of planned behaviour (TPB), in improving insulin administration in young adults with type 1 diabetes, and the role of moderating variables. Those in the intervention condition (N = 8) received one daily text message underpinned by TPB constructs: attitudes, subjective norms, perceived behavioural control and intention. Those in the control condition (N = 10) received weekly general health messages. Self-reported insulin administration was the main outcome measure; conscientiousness and consideration of future consequences (CFC) were measured as potential moderators. Analyses of covariance revealed no main effects of condition for morning and afternoon injections but a marginally significant effect for evening injections (p = .08). This main effect was qualified by significant interactions of condition with conscientiousness (p = .001) and with CFC (p = .007), and by a three-way interaction among condition, conscientiousness and CFC (p = .009). Exploration of the interactions indicated that the intervention significantly improved evening injection rates only in the low-conscientiousness and low-CFC groups; this effect was particularly strong among those low in both conscientiousness and CFC. Further investigation is warranted, using more objective measures of insulin adherence in a larger sample.
Background: Limited access to, understanding of, and trust in paper-based patient information is a key factor influencing paramedic decisions to transfer patients nearing the end of life to hospital. Practical solutions to this problem are rarely examined in research. This paper explores the extent to which access to, and the quality of, patient information affects the care paramedics provide to patients nearing the end of life, and paramedics' views on a shared electronic record as a means of accessing up-to-date patient information.

Method: Semi-structured interviews with paramedics (n = 10) based in the north of England, drawn from a group of health and social care professionals (n = 61) participating in a study exploring data recording and sharing practices in end-of-life care. Data were analysed using thematic analysis.

Results: Two key themes were identified regarding paramedic views of patient information: (1) access to information on patients nearing the end of life, and (2) views on the proposed Electronic Palliative Care Co-ordination System (EPaCCS). Paramedics reported that they are typically unable to access up-to-date patient information, particularly advance care planning documents, and consequently often feel they have little option but to actively treat and transport patients to hospital – a decision not always appropriate for, or desired by, the patient. While paramedics acknowledged that a shared electronic record (such as EPaCCS) could support them to provide community-based care where desired and appropriate, numerous practical and technical issues must be overcome to ensure the successful implementation of such a record.

Conclusions: Access to up-to-date patient information is a barrier to paramedics delivering appropriate end-of-life care. Current approaches to information recording are often inconsistent, inaccurate and inaccessible to paramedics. While a shared electronic record may provide paramedics with greater and timelier access to patient information, enabling them to better facilitate community-based care, this is only one of a series of improvements required to make such care routine practice.
Introduction: The feedback and public reporting of PROMs data aim to improve the quality of care provided to patients. Existing systematic reviews have found it difficult to draw overall conclusions about the effectiveness of PROMs feedback. We aim to conduct a realist synthesis of the evidence to understand by what means and in what circumstances the feedback of PROMs data leads to the intended service improvements.

Methods and analysis: Realist synthesis involves (stage 1) identifying the ideas, assumptions or 'programme theories' which explain how PROMs feedback is supposed to work and in what circumstances, and then (stage 2) reviewing the evidence to determine the extent to which these expectations are met in practice. For stage 1, six provisional 'functions' of PROMs feedback have been identified to structure our review: screening, monitoring, patient involvement, demand management, quality improvement and patient choice. For each function, we will identify the different programme theories that underlie these goals and develop a logical map of the respective implementation processes. In stage 2, we will identify studies that provide empirical tests of each component of the programme theories, to evaluate the circumstances in which potential obstacles can be overcome and whether and how unintended consequences of PROMs feedback arise. We will synthesise this evidence to identify (1) the implementation processes which support or constrain the successful collation, interpretation and utilisation of PROMs data, and (2) the implementation processes through which the unintended consequences of PROMs data arise and those by which they can be avoided.

Ethics and dissemination: The study will not require NHS ethics approval. We have secured ethical approval for the study from the University of Leeds (LTSSP-019). We will disseminate the findings of the review through a briefing paper and dissemination event for National Health Service stakeholders, conferences and peer-reviewed publications.