Background
The idea that underlying, generative mechanisms give rise to causal regularities has become a guiding principle across many social and natural science disciplines. A specific form of this enquiry, realist evaluation, is gaining momentum in the evaluation of complex social interventions. Rather than asking whether an intervention ‘works’, it focuses on ‘what works, how, in which conditions and for whom’, using context–mechanism–outcome configurations. Realist evaluation is difficult to codify and requires considerable researcher reflection and creativity; as such, there is often confusion when operationalising the method in practice. This article aims to clarify and further develop the concept of mechanism in realist evaluation and, in doing so, to aid the learning of those operationalising the methodology.

Discussion
Using a social science illustration, we argue that disaggregating the concept of mechanism into its constituent parts helps to distinguish between the resources offered by the intervention and the ways in which these change the reasoning of participants. This in turn helps to distinguish between a context and a mechanism. We explore the notion of mechanisms ‘firing’ in social science research, discussing how this metaphor may stifle researchers’ realist thinking, and underline the importance of conceptualising mechanisms as operating on a continuum rather than as an ‘on/off’ switch.

Summary
We hope the discussions in this article will help to progress and operationalise realist methods. Such development is timely given the infancy of the methodology and its recently increased profile and use in social science research. The arguments we present have been tested and are explained throughout the article using a social science illustration, evidencing their usability and value.
The development of these minimum measurement standards is intended to promote the appropriate use of PRO measures to inform PCOR and CER, which in turn can improve the effectiveness and efficiency of healthcare delivery. A next step is to expand these minimum standards to identify best practices for selecting decision-relevant PRO measures.
Background
Realist evaluation is increasingly used in health services and other fields of research and evaluation, yet no previous standards exist for reporting realist evaluations. This standard was developed as part of the RAMESES II project, whose aim is to produce initial reporting standards for realist evaluations.

Methods
We purposively recruited a maximum-variety sample of international experts in realist evaluation to our online Delphi panel. Panel members came from a variety of disciplines, sectors and policy fields. We prepared the briefing materials for the panel by summarising the most recent literature on realist evaluations to identify how and why rigour had been demonstrated and where gaps in expertise and rigour were evident. We also drew on our collective experience as realist evaluators, on our work training and supporting realist evaluations, and on the RAMESES email list. Through discussion within the project team, we developed a list of quality-related issues that needed to be addressed when carrying out realist evaluations; these were shared with panel members and their feedback sought. Once the panel members had provided feedback on our briefing materials, we constructed a set of items for potential inclusion in the reporting standards and circulated these online to panel members, who ranked each potential item twice on a 7-point Likert scale, once for relevance and once for validity, and were encouraged to provide free-text comments.

Results
We recruited 35 panel members from 27 organisations across six countries and nine disciplines. Within three rounds, the Delphi panel reached consensus on 20 items to include in the reporting standards for realist evaluations. The overall response rates for rounds 1, 2 and 3 were 94 %, 76 % and 80 %, respectively.

Conclusion
These reporting standards for realist evaluations were developed by drawing on a range of sources. We hope they will lead to greater consistency and rigour of reporting and make realist evaluation reports more accessible, usable and helpful to different stakeholders.
Integrating PROs into clinical practice has the potential to enhance patient-centered care. The online version of the User's Guide will be updated periodically.
Future research needs to identify ways in which PROs can be better incorporated into the routine care of patients, combining qualitative and quantitative methods and adopting appropriate trial designs.
Background
In this paper, we report the findings of a realist synthesis that aimed to understand how, and in what circumstances, patient-reported outcome measures (PROMs) support patient–clinician communication and subsequent care processes and outcomes in clinical care. We tested two overarching programme theories: (1) PROMs completion prompts a process of self-reflection and supports patients to raise issues with clinicians; and (2) PROMs scores raise clinicians’ awareness of patients’ problems and prompt discussion and action. We examined how the structure of the PROM and the care context shaped the ways in which PROMs support clinician–patient communication and subsequent care processes.

Results
PROMs completion prompts patients to reflect on their health and gives them permission to raise issues with clinicians. However, clinicians found that standardised PROMs completion during patient assessments sometimes constrained rather than supported communication. In response, clinicians adapted their use of PROMs to render them compatible with the ongoing management of patient relationships. Individualised PROMs supported dialogue by enabling the patient to tell their story. In oncology, PROMs completion outside the consultation enabled clinicians to identify problematic symptoms when the PROM acted as a substitute for, rather than an addition to, the clinical encounter, and when it focused on symptoms and side effects rather than health-related quality of life (HRQoL). Patients did not always feel it was appropriate to discuss emotional, functional or HRQoL issues with doctors, and doctors did not perceive this to be within their remit.

Conclusions
This paper makes two important contributions to the literature. First, our findings show that PROMs completion is not a neutral act of information retrieval but can change how patients think about their condition. Second, our findings reveal that the ways in which clinicians use PROMs are shaped by their relationships with patients and by professional roles and boundaries. Future research should examine how PROMs completion and feedback shape, and are influenced by, the process of building relationships with patients, rather than just their impact on information exchange and decision making.

Electronic supplementary material
The online version of this article (10.1186/s41687-018-0061-6) contains supplementary material, which is available to authorized users.