Optimizing patient and public involvement (PPI): Identifying its “essential” and “desirable” principles using a systematic review and modified Delphi methodology
“…This relates to a third key message regarding the need for continued attention to the core principles of high‐quality engagement and that these should be extended to its evaluation (e.g. match engagement goals to methods and recruitment, ensure clarity of communication through all stages, including how the input will be used and the sharing of key reports on the engagement process). In adhering to principles of good engagement practice, organizations must not only be willing to evaluate their engagement activities but to share the feedback collected and plans for acting on it.…”
Background
As citizens, patients and family members are participating in numerous and expanding roles in health system organizations, attention has turned to evaluating these efforts. The context‐specific nature of engagement requires evaluation tools to be carefully designed for optimal use. We sought to address this need by assessing the appropriateness and feasibility of a generic tool across a range of health system organizations, engagement activities and patient groups.
Methods
We used a mixed‐methods implementation research design to study the implementation of an engagement evaluation tool in seven health system organizations in Ontario, Canada, focusing on two key implementation outcome variables: appropriateness and feasibility. Data were collected through respondent feedback questions (binary and open‐ended) at the end of the tool's three questionnaires, as well as interviews and debriefing discussions with engagement professionals and patient partners from collaborating organizations.
Results
The three questionnaires comprising the evaluation tool were collectively administered 29 times to 405 respondents, yielding a 52% response rate (90% and 53% of respondents, respectively, assessed the survey's appropriateness and feasibility [quantitatively or qualitatively]). The questionnaires' basic properties were rated highly by all respondents. Concrete suggestions were provided for improving the appropriateness and feasibility of the questionnaires (or components within) for different engagement activity and organization types, and for enhancing the timing of implementation.
Discussion and Conclusions
Our study findings offer guidance for health system organizations and evaluators to support the optimal use of engagement evaluation tools across a variety of health system settings, engagement activities and respondent groups.
“…A total of 56 frameworks were written up in 55 academic papers. 5, 10, 12, 31, 33–36, 38–45, 47–50, 52, 53, 55–57, 60–67, 69–84, 87–90…”
Background
Numerous frameworks for supporting, evaluating and reporting patient and public involvement in research exist. The literature is diverse and theoretically heterogeneous.
Objectives
To identify and synthesize published frameworks, consider whether and how these have been used, and apply design principles to improve usability.
Search strategy
Keyword search of six databases; hand search of eight journals; ancestry and snowball search; requests to experts.
Inclusion criteria
Published, systematic approaches (frameworks) designed to support, evaluate or report on patient or public involvement in health‐related research.
Data extraction and synthesis
Data were extracted on provenance; collaborators and sponsors; theoretical basis; lay input; intended user(s) and use(s); topics covered; examples of use; critiques; and updates. We used the Canadian Centre for Excellence on Partnerships with Patients and Public (CEPPP) evaluation tool and hermeneutic methodology to grade and synthesize the frameworks. In five co‐design workshops, we tested evidence‐based resources based on the review findings.
Results
Our final data set consisted of 65 frameworks, most of which scored highly on the CEPPP tool. They had different provenances, intended purposes, strengths and limitations. We grouped them into five categories: power‐focused; priority‐setting; study‐focused; report‐focused; and partnership‐focused. Frameworks were used mainly by the groups who developed them. The empirical component of our study generated a structured format and evidence‐based facilitator notes for a “build your own framework” co‐design workshop.
Conclusion
The plethora of frameworks combined with evidence of limited transferability suggests that a single, off‐the‐shelf framework may be less useful than a menu of evidence‐based resources which stakeholders can use to co‐design their own frameworks.
“…Limited time, lack of funding, mismatched expectations, negative attitudes and differences in status are described as negatively influencing an ‘equal collaboration’. While some studies underline the need to share power, others pay attention to the ways power is played out by using different theoretical approaches. Different studies aiming to explore power in research collaboration tend to draw the conclusion that power hierarchies still exist…”
Background
Equity is described as an ideal in user involvement in research and is mentioned in the health service literature and in several guidelines. However, equity is described as difficult to obtain, and the concept is rarely clarified or concretized. Equity can also be understood as socially constructed.
Objective
This study explored users' and researchers' constructions of equity in research processes.
Design and Method
The study had a qualitative research design. Constructions of equity were analysed through the lens of positioning theory. Two focus group interviews consisting of both users and researchers were conducted.
Findings
The thirteen users and four researchers considered ‘equity’ as an important part of user involvement in research. Storylines about norms, responsibility, language, knowledge and usefulness evolved in the discussions. These storylines elucidated unequal access to rights and duties.
Discussion and conclusion
Users and researchers constructed equity in user involvement differently, but the difference was masked by an apparent agreement. Users and researchers drew on different storylines. The researchers emphasized the scientific discourse, and although users acknowledged this discourse, they attempted to oppose it by drawing on a lay discourse. The identified constructions and negotiations of equity may contribute to new understandings of equal collaboration in user involvement in research.