Background: The Internet is increasingly considered to be an efficient medium for assessing the quality of health care seen from the patients' perspective. Potential benefits of Internet surveys such as time efficiency, reduced effort, and lower costs should be balanced against potential weaknesses such as low response rates and accessibility for only a subset of potential participants. Combining an Internet questionnaire with a traditional paper follow-up questionnaire (mixed-mode survey) can possibly compensate for these weaknesses and provide an alternative to a postal survey. Objective: To examine whether there are differences between a mixed-mode survey and a postal survey in terms of respondent characteristics, response rate and time, quality of data, costs, and global ratings of health care or health care providers (general practitioner, hospital care in the diagnostic phase, surgeon, nurses, radiotherapy, chemotherapy, and hospital care in general). Methods: Differences between the two surveys were examined in a sample of breast care patients using the Consumer Quality Index Breast Care questionnaire. We selected 800 breast care patients from the reimbursement files of Dutch health insurance companies. We asked 400 patients to fill out the questionnaire online, followed by a paper reminder (mixed-mode survey), and 400 patients, matched by age and gender, received the questionnaire by mail only (postal survey). Both groups received three reminders. Results: The respondents to the two surveys did not differ in age, gender, level of education, or self-reported physical and psychological health (all Ps > .05). In the postal survey, the questionnaires were returned 20 days earlier than in the mixed-mode survey (median 12 and 32 days, respectively; P < .001), whereas the response rate did not differ significantly (256/400, 64.0% versus 242/400, 60.5%, respectively; P = .30). The costs were lower for the mixed-mode survey (€2 per questionnaire).
Moreover, there were fewer missing items (3.4% versus 4.4%, P = .002) and fewer invalid answers (3.2% versus 6.2%, P < .001) in the mixed-mode survey than in the postal survey. The answers of the two respondent groups on the global ratings did not differ. Within the mixed-mode survey, 52.9% (128/242) of the respondents filled out the questionnaire online. Respondents who filled out the questionnaire online were significantly younger (P < .001), were more often highly educated (P = .002), and reported better psychological health (P = .02) than respondents who filled out the paper questionnaire. Respondents to the paper questionnaire rated the nurses significantly more positively than respondents to the online questionnaire (score 9.2 versus 8.4, respectively; χ²(1) = 5.6). Conclusions: Mixed-mode surveys are an alternative method to postal surveys that yield comparable response rates and groups of respondents, at lower costs. Moreover, quality of health care was not rated differently by respondents to the mixed-mode or postal survey. Researchers should consider using mixed-mode surveys in...
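The response-rate comparison reported above (256/400 versus 242/400, P = .30) is the kind of result produced by a standard chi-square test on the 2×2 table of responders and non-responders. A minimal sketch, assuming SciPy is available (the exact test the authors used is not stated, so treat this as an illustration of the arithmetic, not their method):

```python
from scipy.stats import chi2_contingency

# 2x2 table: rows = survey mode, columns = responded / did not respond
table = [
    [256, 144],  # postal survey: 256 of 400 responded (64.0%)
    [242, 158],  # mixed-mode survey: 242 of 400 responded (60.5%)
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, P = {p:.2f}")
```

The resulting P value is well above .05, consistent with the abstract's conclusion that the response rates do not differ significantly.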
Background: Patient reported outcomes (PROs) provide information on a patient's health status coming directly from the patient. Measuring PROs with patient reported outcome measures (PROMs) has gained wide interest in clinical practice for individual patient care, as well as in quality improvement, and for providing transparency of outcomes to stakeholders through public reporting. However, current knowledge of selecting and implementing PROMs for these purposes is scattered, and not readily available for clinicians and quality managers in healthcare organizations. The objective of this study is to develop a framework with tools to support the systematic selection, implementation and evaluation of PROs and PROMs in individual patient care, for quality improvement and public reporting. Methods: We developed the framework in a national project in the Netherlands following a user-centered design. The development process of the framework contained five iterative components: (a) identification of existing tools, (b) identification of user requirements and designing steps for selection and implementation of PROs and PROMs, (c) discussing a prototype of the framework during a national workshop, (d) developing a web version, (e) pretesting of the framework. A total of 40 users with different perspectives (clinicians, patient representatives, quality managers, purchasers, researchers) have been consulted. Results: The final framework is presented as the PROM-cycle that consists of eight steps in four phases: (1) goal setting, (2) selecting PROs and PROMs, (3) developing and testing of quality indicator(s), (4) implementing and evaluating the PROM(s) and indicator(s). Users emphasized that the first step is the key element in which the why, for whom and setting of the PROM has to be defined. This information is decisive for the following steps. 
For each step, the PROM-cycle provides guidance and tools, with instruments, checklists, methods, handbooks, and standards supporting the process. Conclusion: We developed a framework to support the selection and implementation of PROs and PROMs. Each step provides guidance and tools to support the process. The PROM-cycle and its tools are publicly available and can be used by clinicians, quality managers, patient representatives, and other experts involved in using PROMs. Through periodic evaluation and updates, tools will be added for national and international use of the PROM-cycle.
Methods: Patients with RA (n = 590) received this survey, in which they rated their actual experiences and what they found important in rheumatic healthcare. Descriptive analyses and psychometric methods were used to test the reliability. Results: The response rate was 69%. The items in the pilot instrument could be grouped into ten scales (α ranged from 0.77 to 0.94). The most important quality aspect according to patients concerned alertness when prescribing medication. Providing patients with information about RA on a special hospital website was the aspect with the greatest potential for quality improvement. Conclusion: The results of this study show that the CQ-index RA is a reliable instrument for quality assessment from the patients' perspective. The instrument provides rheumatologists and other caregivers with feedback for service improvement initiatives. Key indexing terms: consumer; experiences; rheumatoid arthritis; quality of healthcare. INTRODUCTION: Quality of care has become increasingly important in the evaluation of healthcare and healthcare services (1-3). Evaluating rheumatic healthcare quality is a major issue given the care need profile of patients with rheumatoid arthritis (RA) and their long-term dependency on healthcare (4). Evaluation of quality of care is often performed by healthcare professionals. However, patients' perspectives on healthcare quality differ from the views of healthcare professionals and policy makers (5-7). Also, patients' perspectives on the quality of care have become more prominent in research and policy since the introduction of the concept of patient-centered care in many countries (8;9). This concept aims to empower patients with respect to their healthcare decisions and to (re)structure the healthcare system according to their needs. Patients' views on quality of care have often been conceptualised as patient satisfaction (10-12).
A disadvantage of these surveys is that the scores are extremely subjective, highly skewed (>90% are satisfied), and influenced by personal preference and patient expectation (13). Caregivers and healthcare services cannot influence patients' expectations, but they can change the actual experiences. Therefore, a more refined and less subjective instrument for evaluating healthcare quality from the patients' perspective seems necessary. The Consumer Quality index (CQ-index) provides such an instrument (14;15). The CQ-index is based on two families of surveys that measure patients' experiences. The first family of surveys is the Consumer Assessment of Healthcare Providers and Systems (CAHPS®), which is well-established and widely used in the USA (www.cahps.ahrq.gov). This methodology comprises standardized protocols and manuals concerning sampling, data collection, data entry, data analysis, and data reporting, which are also used as a reference for CQ-index research. Furthermore, the lay-out and answering categories on a four-point scale (never, sometimes, usually, always), three-point scale (not a pro...
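The scale reliabilities reported above are Cronbach's α values, computed per scale as k/(k-1) · (1 − Σ item variances / variance of the total score) for k items. A minimal sketch of that computation on synthetic data (the scores below are invented for illustration, not the study's data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale totals
    return k / (k - 1) * (1 - item_vars / total_var)

# Synthetic example: four respondents answering three strongly related items
scores = np.array([
    [1, 1, 1],
    [2, 2, 2],
    [3, 3, 3],
    [4, 4, 5],
])
alpha = cronbach_alpha(scores)
print(round(alpha, 2))  # close to 1: high internal consistency
```

Scales with α above roughly 0.7, as in the range 0.77 to 0.94 reported here, are conventionally considered sufficiently reliable for group-level comparisons.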
Background: The International Consortium for Health Outcomes Measurement (ICHOM) develops condition-specific Standard Sets of outcomes to be measured in clinical practice for value-based healthcare evaluation. Standard Sets are developed by different working groups, which is inefficient and may lead to inconsistencies in selected PROs and PROMs. We aimed to identify common PROs across ICHOM Standard Sets and to examine to what extent these PROs can be measured with a generic set of PROMs: the Patient-Reported Outcomes Measurement Information System (PROMIS®). Methods: We extracted all PROs and recommended PROMs from 39 ICHOM Standard Sets. Similar PROs were categorized into unique PRO concepts. We examined which of these PRO concepts can be measured with PROMIS. Results: A total of 307 PROs were identified in 39 ICHOM Standard Sets, and 114 unique PROMs were recommended for measuring these PROs. The 307 PROs could be categorized into 22 unique PRO concepts. More than half (17/22) of these PRO concepts (covering about 75% of the PROs and 75% of the PROMs) can be measured with a PROMIS measure. Conclusion: Considerable overlap was found in PROs across ICHOM Standard Sets, alongside large differences in the terminology used and the PROMs recommended, even for the same PROs. We recommend a more universal and standardized approach to the selection of PROs and PROMs. Such an approach, focusing on a set of core PROs for all patients, measured with a system like PROMIS, may provide more opportunities for patient-centered care and facilitate the uptake of Standard Sets in clinical practice.
Although measuring client experiences is obligatory, it does not by itself guarantee that client feedback is used for quality improvement. Although measuring client experiences has led to various improvement initiatives, their effectiveness remains unclear. There is a need for guidance on how to improve client experiences effectively.