Background: Self-report questionnaires are widely used to assess changes in quality of life (QoL) during the course of cancer treatment. However, comparing baseline scores to follow-up scores is only justified if patients' internal measurement standards have not changed over time, that is, if no response shift occurred. We aimed to examine response shift in terms of reconceptualization, reprioritization, and recalibration among prostate cancer patients.
Material and methods: We included 402 newly diagnosed patients (mean age 65 years) and assessed QoL at the beginning of cancer treatment and three months later. QoL was measured with the European Organization for Research and Treatment of Cancer Quality of Life Questionnaire Core 30 (EORTC QLQ-C30). We employed structural equation modeling, testing measurement invariance between occasions to disentangle 'true' change from change in the measurement model (response shift).
Results: We found reprioritization effects for both the Physical Functioning and Role Functioning subscales of the EORTC QLQ-C30, indicating that both had gained importance for representing the latent construct of QoL at follow-up. These effects added to the worsening evident in the latent construct, rendering observed changes even more pronounced. In addition, we found recalibration effects for both the Emotional Functioning and Cognitive Functioning subscales, indicating that judgments became more lenient over time. These effects counteracted 'true' negative changes, obscuring substantial changes at the observed level.
Conclusion: Our results suggest that changes observed in some subscales of the EORTC QLQ-C30 should not be taken at face value, as they may be affected by patients' changed measurement standards.
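A minimal numeric sketch of the recalibration pattern the abstract reports: a true latent decline is masked at the observed level when the measurement intercept shifts between occasions. The sample size matches the study (n = 402), but the loading, intercepts, and change values below are hypothetical illustrations, not the article's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 402  # sample size as in the study; all other values are hypothetical

# Latent QoL declines between baseline (t1) and follow-up (t2): 'true' change
qol_t1 = rng.normal(50.0, 10.0, n)
qol_t2 = qol_t1 - 5.0 + rng.normal(0.0, 2.0, n)  # true decline of ~5 points

# Measurement model: observed score = intercept + loading * latent + error.
# Uniform recalibration = the intercept shifts between occasions.
loading = 0.8
tau_t1 = 10.0  # intercept at baseline
tau_t2 = 14.0  # intercept is higher at follow-up (more lenient judgments)

obs_t1 = tau_t1 + loading * qol_t1 + rng.normal(0.0, 3.0, n)
obs_t2 = tau_t2 + loading * qol_t2 + rng.normal(0.0, 3.0, n)

true_change = (qol_t2 - qol_t1).mean()      # near -5: the latent decline
observed_change = (obs_t2 - obs_t1).mean()  # near 0: decline is masked
print(f"true latent change: {true_change:+.2f}")
print(f"observed change:    {observed_change:+.2f}")
```

Here the expected observed change is loading × (−5) + (τ₂ − τ₁) = −4 + 4 ≈ 0, so the recalibration exactly offsets the latent decline, mirroring the pattern described for the Emotional and Cognitive Functioning subscales.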
Diverse methods are available for evaluating (medical) interventions, and in each case a specific method must be chosen. Our aim was to analyze typical problems involved in measuring change. Different methods are delineated, and their specific pros and cons are set out. Subsequently, empirically derived recommendations are outlined as to which method should be employed for which problem and under which circumstances. A characteristic of rehabilitation treatment is that, as a rule, a multitude of problems are addressed, and accordingly treatment goals are heterogeneous. Straightforward recommendations for one method or another therefore cannot be given.
Subjective constructs such as health-related quality of life are often investigated in scientific surveys in rehabilitation science, usually under the assumption that these constructs are defined identically across groups (in cross-sectional control-group designs) or across time (in longitudinal designs with or without control groups). Differences between measurements of such constructs are then expected to be quantitative only, not qualitative. This assumption, however, cannot be taken for granted in every case; it is discussed theoretically under the terms measurement invariance and measurement equivalence. Approaches based on confirmatory factor analysis are suitable for investigating measurement invariance empirically and are described in this article. These statistical methods can test whether qualitative differences in constructs exist between groups or time points (response shift) and what such differences mean. If measurement invariance does not hold, comparisons of sum scores, which are common in rehabilitation science, must be regarded as questionable. On the basis of a measurement model, specific parameters (regression weights, intercepts, measurement errors) can be analyzed both between comparison groups and over time. Different levels of measurement invariance exist, depending on which parameters are constrained to be equal and on the extent of differences between models. The application of confirmatory factor analysis to test measurement invariance in a cross-sectional design is described here using quality-of-life data from inpatient rehabilitation. Methodological and substantive issues that arise when measurement invariance is rejected are discussed. A companion article (Jelitte & Schuler, in press) describes the method for a longitudinal study design and discusses the results in the context of response shift research.
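The parameter constraints behind these invariance levels can be sketched with the standard linear CFA measurement model; the notation below follows common SEM convention and is not taken from the article itself. For indicator i in group (or at occasion) g,

```latex
x_{ig} = \tau_{ig} + \lambda_{ig}\,\xi_{g} + \varepsilon_{ig}
```

Configural invariance requires only the same loading pattern across g; metric (weak) invariance additionally constrains the loadings, λ_{ig} = λ_i; scalar (strong) invariance further constrains the intercepts, τ_{ig} = τ_i; and strict invariance adds equal residual variances, Var(ε_{ig}) = Var(ε_i). Only from scalar invariance onward are mean or sum-score comparisons across groups or occasions interpretable as differences in the latent construct.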
Interventions in medical rehabilitation are often evaluated using a single-group pre-post design with health-related quality of life (HRQoL) as the outcome variable. Treatment effects are calculated by comparing mean values on HRQoL subscales, and in many cases conclusions about changes in HRQoL are drawn from the sizes of these effects. This presupposes, however, that the items were answered within the same frames of internal standards, values, and conceptualizations at the different time points. Changes in these frames do occur, are discussed under the term response shift, and can arise as patients adjust to chronic and progressive diseases. Response shift can be demonstrated with confirmatory factor analysis (CFA) by measuring HRQoL at different time points. This approach belongs to the broader issue of measurement invariance in longitudinal studies and is described here in a sample of 279 patients with diabetes mellitus. Different response shift processes were detectable. If response shift occurs but is not taken into account, inferences from changes in scale scores to changes in HRQoL are invalid; consequently, calculated effect sizes are also affected. Using CFA, the conventionally calculated effect size can be decomposed into effects due to response shift and 'true' change in HRQoL. Measurement invariance within one group at two time points can be distinguished from multiple-group analysis at one time point. Investigating measurement invariance in longitudinal studies also permits conclusions about the sensitivity to change of instruments measuring HRQoL. This is important for clinicians deciding which scales are appropriate for detecting HRQoL changes, and for researchers conducting further analyses of the sensitivity to change of HRQoL instruments.