Background
Increasingly, researchers need to demonstrate the impact of their research to their sponsors, funders, and fellow academics. However, the most appropriate way of measuring the impact of healthcare research is subject to debate. We aimed to identify the existing methodological frameworks used to measure healthcare research impact and to summarise the common themes and metrics in an impact matrix.

Methods and findings
Two independent investigators systematically searched the Medical Literature Analysis and Retrieval System Online (MEDLINE), the Excerpta Medica Database (EMBASE), the Cumulative Index to Nursing and Allied Health Literature (CINAHL+), the Health Management Information Consortium, and the Journal of Research Evaluation from inception until May 2017 for publications that presented a methodological framework for research impact. We then summarised the common concepts and themes across methodological frameworks and identified the metrics used to evaluate differing forms of impact. Twenty-four unique methodological frameworks were identified, addressing 5 broad categories of impact: (1) ‘primary research-related impact’, (2) ‘influence on policy making’, (3) ‘health and health systems impact’, (4) ‘health-related and societal impact’, and (5) ‘broader economic impact’. These categories were subdivided into 16 common impact subgroups. Authors of the included publications proposed 80 different metrics aimed at measuring impact in these areas. The main limitation of the study was the potential exclusion of relevant articles, as a consequence of the poor indexing of the databases searched.

Conclusions
The measurement of research impact is an essential exercise to help direct the allocation of limited research resources, to maximise research benefit, and to help minimise research waste. This review provides a collective summary of existing methodological frameworks for research impact, which funders may use to inform the measurement of research impact and researchers may use to inform study design decisions aimed at maximising the short-, medium-, and long-term impact of their research.
Background
Core outcome sets (COS) help to minimise bias in trials and facilitate evidence synthesis. Delphi surveys are increasingly being used as part of a wider process to reach consensus about which outcomes should be included in a COS. Qualitative research can be used to inform the development of Delphi surveys. This is an advance in the field of COS development, and one which is potentially valuable; however, little guidance exists for COS developers on how best to use qualitative methods and what the challenges are. This paper aims to provide early guidance on the potential role and contribution of qualitative research in this area. We hope the ideas we present will be challenged, critiqued and built upon by others exploring the role of qualitative research in COS development.

This paper draws upon the experiences of using qualitative methods in the pre-Delphi stage of the development of three different COS. Using these studies as examples, we identify some of the ways in which qualitative research might contribute to COS development, the challenges of using such methods and areas where future research is required.

Results
Qualitative research can help to identify which outcomes are important to stakeholders; facilitate understanding of why some outcomes may be more important than others; determine the scope of outcomes; identify appropriate language for use in the Delphi survey; and inform comparisons between stakeholder data and other sources, such as systematic reviews. Developers need to consider a number of methodological points when using qualitative research: specifically, which stakeholders to involve, how to sample participants, which data collection methods are most appropriate, how to consider outcomes with stakeholders and how to analyse these data. A number of areas for future research are identified.

Conclusions
Qualitative research has the potential to increase the research community’s confidence in COS, although this will depend upon the use of rigorous and appropriate methodology. In this article, we have begun to identify some issues for COS developers to consider when using qualitative methods to inform the development of Delphi surveys.
Background
Patient-reported outcomes (PROs) are captured within cancer trials to help future patients and their clinicians make more informed treatment decisions. However, variability in standards of PRO trial design and reporting threatens the validity of these endpoints for application in clinical practice.

Methods
We systematically investigated a cohort of randomized controlled cancer trials that included a primary or secondary PRO. For each trial, an evaluation of protocol and reporting quality was undertaken using standard checklists. General patterns of reporting were also explored.

Results
Protocols (101 sourced, 44.3%) included a mean (SD) of 10 (4) of 33 (range = 2–19) PRO protocol checklist items. Recommended items frequently omitted included the rationale and objectives underpinning PRO collection and approaches to minimize/address missing PRO data. Of 160 trials with published results, 61 (38.1%, 95% confidence interval = 30.6% to 45.7%) failed to include their PRO findings in any publication (mean 6.43-year follow-up); these trials included 49 568 participants. Although two-thirds of included trials published PRO findings, reporting standards were often inadequate according to international guidelines (mean [SD] inclusion of 3 [3] of 14 [range = 0–11] CONSORT PRO Extension checklist items). More than one-half of trials publishing PRO results in a secondary publication (12 of 22, 54.5%) took 4 or more years to do so following trial closure, with eight (36.4%) taking 5–8 years and one trial publishing after 14 years.

Conclusions
PRO protocol content is frequently inadequate, and nonreporting of PRO findings is widespread, meaning patient-important information may not be available to benefit patients, clinicians, and regulators. Even where PRO data are published, there is often considerable delay, and reporting quality is suboptimal. This study presents key recommendations to enhance the likelihood of successful delivery of PROs in the future.
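The headline figures above can be reproduced with simple arithmetic. The sketch below assumes a normal-approximation (Wald) 95% confidence interval, which matches the quoted values; the authors' exact interval method is not stated in the abstract.

```python
import math

# 61 of 160 trials with published results failed to publish PRO findings.
k, n = 61, 160
p = k / n                              # proportion of non-reporting trials
se = math.sqrt(p * (1 - p) / n)        # standard error of a proportion
lo, hi = p - 1.96 * se, p + 1.96 * se  # Wald 95% confidence interval (assumed)
print(f"{p:.1%} (95% CI {lo:.1%} to {hi:.1%})")
# → 38.1% (95% CI 30.6% to 45.7%), matching the reported figures
```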
Background
Patient-reported outcomes (PROs), such as health-related quality of life (HRQL), are increasingly used to evaluate treatment effectiveness in clinical trials, are valued by patients, and may inform important decisions in the clinical setting. It is of concern, therefore, that preliminary evidence, gained from group discussions at UK-wide Medical Research Council (MRC) quality of life training days, suggests there are inconsistent standards of HRQL data collection in trials and that appropriate training and education is often lacking. Our objective was to investigate these reports, to determine whether they represented isolated experiences or were indicative of a potentially wider problem.

Methods and findings
We undertook a qualitative study, conducting 26 semi-structured interviews with research nurses, data managers, trial coordinators and research facilitators involved in the collection and entry of HRQL data in clinical trials, across one primary care NHS trust, two secondary care NHS trusts and two clinical trials units in the UK. We used conventional content analysis to analyze and interpret our data. Our study participants reported (1) inconsistent standards in HRQL measurement, both between and within trials, which appeared to risk the introduction of bias; (2) difficulties in dealing with HRQL data that raised concern for the well-being of the trial participant, which in some instances led to the delivery of non-protocol-driven co-interventions; (3) a frequent lack of HRQL protocol content and of appropriate training and education of trial staff; and (4) that HRQL data collection could be associated with emotional and/or ethical burden.

Conclusions
Our findings suggest there are inconsistencies in the standards of HRQL data collection in some trials, resulting from a general lack of HRQL-specific protocol content, training and education. These inconsistencies could lead to biased HRQL trial results. Future research should aim to develop HRQL guidelines and training programmes that support researchers in carrying out high-quality data collection.
Background
Patient-reported outcome measures (PROMs) can provide valuable information which may assist with the care of patients with chronic kidney disease (CKD). However, given the large number of measures available, it is unclear which PROMs are suitable for use in research or clinical practice. To address this, we comprehensively evaluated studies that assessed the measurement properties of PROMs in adults with CKD.

Methods
Four databases were searched; reference list and citation searching of included studies was also conducted. The COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist was used to appraise the methodological quality of the included studies and to inform a best evidence synthesis for each PROM.

Results
The search strategy retrieved 3,702 titles/abstracts. After 288 duplicates were removed, 3,414 abstracts were screened and 71 full-text articles were retrieved for further review. Of these, 24 full-text articles were excluded as they did not meet the eligibility criteria. Following reference list and citation searching, 19 further articles were retrieved, bringing the total number of papers included in the final analysis to 66. There was strong evidence supporting internal consistency and moderate evidence supporting construct validity for the Kidney Disease Quality of Life-36 (KDQOL-36) in pre-dialysis patients. In the dialysis population, the KDQOL-Short Form (KDQOL-SF) had strong evidence for internal consistency and structural validity, and moderate evidence for test-retest reliability and construct validity, while the KDQOL-36 had moderate evidence of internal consistency, test-retest reliability and construct validity. The End Stage Renal Disease-Symptom Checklist Transplantation Module (ESRD-SCLTM) demonstrated strong evidence for internal consistency and moderate evidence for test-retest reliability, structural and construct validity in renal transplant recipients.

Conclusions
We suggest considering the KDQOL-36 for use in pre-dialysis patients, the KDQOL-SF or KDQOL-36 for dialysis patients, and the ESRD-SCLTM for transplant recipients. However, further research is required to evaluate the measurement error, structural validity, responsiveness and patient acceptability of PROMs used in CKD.
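The study-selection numbers reported in the review form a simple PRISMA-style flow whose totals can be checked directly; every figure below is taken from the abstract itself.

```python
# Study-selection flow, using only the counts reported in the abstract.
retrieved = 3702       # titles/abstracts retrieved by the search strategy
duplicates = 288       # duplicate records removed
screened = retrieved - duplicates        # abstracts screened
full_text = 71         # full-text articles retrieved for review
excluded = 24          # full texts not meeting eligibility criteria
citation_search = 19   # further articles from reference/citation searching
included = full_text - excluded + citation_search
print(screened, included)  # → 3414 66, consistent with the reported totals
```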