Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms can take into account clustered or nested dataset structures, which are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower Type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. On average, GLMM trees also show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects. We illustrate the application of GLMM trees with an individual patient data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.
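The alternation the abstract describes, partitioning on the fixed part while a mixed model absorbs the cluster structure, can be illustrated with a minimal pure-Python sketch. This is not the authors' implementation (which uses model-based recursive partitioning with parameter-instability tests and full (G)LMM estimation, available as the R package glmertree): here a single exhaustive split on one covariate and unshrunken cluster-mean intercepts stand in for the tree and the GLMM, and all variable names are illustrative.

```python
import random
from statistics import mean

def fit_glmm_tree(x, t, y, cluster, candidates, n_iter=20):
    """Alternate between (a) one exhaustive split on x, with node-by-arm
    means as the fixed (tree) part, and (b) method-of-moments random
    intercepts per cluster. A toy stand-in for the GLMM tree alternation."""
    b = {g: 0.0 for g in set(cluster)}   # random intercepts, start at zero
    split = None
    for _ in range(n_iter):
        # (a) partition step on the random-effect-adjusted response
        y_adj = [yi - b[g] for yi, g in zip(y, cluster)]
        best_sse, best_split = float("inf"), None
        for s in candidates:
            sse, ok = 0.0, True
            for node in (0, 1):
                for arm in (0, 1):
                    vals = [ya for ya, xi, ti in zip(y_adj, x, t)
                            if int(xi > s) == node and ti == arm]
                    if len(vals) < 2:
                        ok = False
                        continue
                    m = mean(vals)
                    sse += sum((v - m) ** 2 for v in vals)
            if ok and sse < best_sse:
                best_sse, best_split = sse, s
        if best_split == split:          # node assignments stable: converged
            break
        split = best_split
        cell = {(node, arm): mean(ya for ya, xi, ti in zip(y_adj, x, t)
                                  if int(xi > split) == node and ti == arm)
                for node in (0, 1) for arm in (0, 1)}
        # (b) random-effect step: cluster means of the fixed-part residuals
        resid = {g: [] for g in b}
        for yi, xi, ti, g in zip(y, x, t, cluster):
            resid[g].append(yi - cell[int(xi > split), ti])
        b = {g: mean(r) for g, r in resid.items()}
        grand = mean(b.values())
        b = {g: v - grand for g, v in b.items()}  # centre for identifiability
    return split, cell, b

# Simulated clustered data with a true treatment-subgroup interaction:
# treatment helps (+1) when x > 0.5 and harms (-1) otherwise.
random.seed(3)
b_true = {g: random.gauss(0.0, 1.0) for g in range(10)}
x, t, y, cl = [], [], [], []
for g in b_true:
    for _ in range(40):
        xi, ti = random.random(), random.randrange(2)
        eff = 1.0 if xi > 0.5 else -1.0
        x.append(xi); t.append(ti); cl.append(g)
        y.append(b_true[g] + ti * eff + random.gauss(0.0, 0.5))

split, cell, b_hat = fit_glmm_tree(x, t, y, cl, [i / 20 for i in range(1, 20)])
eff_lo = cell[0, 1] - cell[0, 0]   # estimated treatment effect, x <= split
eff_hi = cell[1, 1] - cell[1, 0]   # estimated treatment effect, x > split
```

On this simulated dataset the recovered split point lies near the true boundary of 0.5, with treatment effects of opposite sign in the two subgroups, which is exactly the kind of treatment-subgroup interaction the abstract targets.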
In both clinical practice and research, the efficacy of treatments for depression is often measured by comparing observed total scores on self-report inventories. However, the occurrence of response shifts (changes in subjects' values, or in their standards of measurement) may limit the validity of such comparisons. As most psychological treatments for depression aim to change patients' values and frame of reference, response shifts are likely to occur over the course of such treatments. In this article, we tested whether response shifts occurred over the course of treatment in an influential randomized clinical trial. Using confirmatory factor analysis, we analyzed the measurement models underlying item scores on the Beck Depression Inventory (Beck & Beamesderfer, 1974) from the National Institute of Mental Health Treatment of Depression Collaborative Research Program (Elkin, Parloff, Hadley, & Autry, 1985). Compared with before treatment, after-treatment item scores appeared to overestimate depressive symptomatology, measurement errors were smaller, and correlations between constructs were stronger. These findings indicate a response shift, in the sense that participants seem to become better at assessing their level of depressive symptomatology. Comparing the measurement models of patients receiving psychotherapy with those of patients receiving medication suggested that the aforementioned effects were more apparent in the psychotherapy groups. Consequently, comparisons of observed total scores on self-report inventories may yield confounded measures of treatment efficacy.
A loglinear IRT model is proposed that relates polytomously scored item responses to a multidimensional latent space. The analyst may specify a response function for each response, indicating which latent abilities are necessary to arrive at that response. Each item may have a different number of response categories, so that free-response items are more easily analyzed. Conditional maximum likelihood estimates are derived, and the models may be tested generally or against alternative loglinear IRT models.

Key words: multidimensional item response theory, loglinear model, Rasch model, multidimensional Rasch model, polytomous responses, partial credit model, goodness-of-fit testing.

Educational and psychological tests or item banks are ordinarily used to measure individual differences that are inferred from behavior. A test typically consists of a set of items varying with respect to certain task properties that may present difficulties the subject has to overcome to give the correct response. Most tests are constructed in such a way that each item presents a problem that can be solved by some characteristic cognitive behavior that the test intends to measure. Item properties that present problems irrelevant to the measurement purpose are manipulated in such a way that they become very easy for most subjects. In this way, items are constructed that measure the behavior of interest.

Item response theory (IRT) models, such as the one-, two-, and three-parameter logistic models, are suited to explain a subject's response to each of the items by a subject parameter and one or more item parameters. Typically, model parameters characterize both items and subjects on a single latent trait. Likewise, for the case of polytomously scored items, IRT models have been proposed that relate responses to a single underlying latent trait.
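The conditional maximum likelihood (CML) machinery the abstract mentions can be shown concretely for the simplest member of this family, the dichotomous unidimensional Rasch model, where person parameters are conditioned out via the raw score. The sketch below is a toy illustration under that special case, not the multidimensional polytomous estimator of the paper: it computes elementary symmetric functions by the standard recursion and maximizes the conditional log-likelihood of the item difficulties by gradient ascent on sufficient statistics.

```python
import math
import random

def esf(eps):
    """Elementary symmetric functions gamma_0..gamma_k of eps_1..eps_k,
    built up one item at a time by the usual recursion."""
    g = [1.0]
    for e in eps:
        new = [1.0] + [0.0] * len(g)
        for r in range(1, len(g) + 1):
            new[r] = (g[r] if r < len(g) else 0.0) + e * g[r - 1]
        g = new
    return g

def cml_rasch(data, k, n_steps=2000, step=1.0):
    """CML estimation of Rasch item difficulties beta_i (sum-to-zero).
    Gradient per item: sum_r count[r] * E[x_i | r] - item total, with
    E[x_i | r] = eps_i * gamma^(i)_{r-1} / gamma_r."""
    counts = [0] * (k + 1)            # persons per raw score r
    s = [0] * k                       # item totals (sufficient statistics)
    for row in data:
        counts[sum(row)] += 1
        for i in range(k):
            s[i] += row[i]
    n = len(data)
    beta = [0.0] * k
    for _ in range(n_steps):
        eps = [math.exp(-b) for b in beta]
        g = esf(eps)
        grad = [0.0] * k
        for i in range(k):
            # esf of the remaining items, by downdating the full esf
            gi = [1.0]
            for r in range(1, k):
                gi.append(g[r] - eps[i] * gi[r - 1])
            grad[i] = sum(counts[r] * eps[i] * gi[r - 1] / g[r]
                          for r in range(1, k + 1)) - s[i]
        beta = [b + step * gr / n for b, gr in zip(beta, grad)]
    return beta

# Simulate 500 persons on 5 Rasch items, then recover the difficulties.
random.seed(11)
true_beta = [-1.0, -0.5, 0.0, 0.5, 1.0]
data = [[int(random.random() < 1.0 / (1.0 + math.exp(-(th - b))))
         for b in true_beta]
        for th in (random.gauss(0.0, 1.0) for _ in range(500))]
est = cml_rasch(data, 5)
```

Note that persons with raw score 0 or k contribute nothing to the conditional likelihood; in the code their gradient contributions cancel automatically, so they need not be removed.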
Mixture item response theory (IRT) models aid the interpretation of response behavior on personality tests and may provide possibilities for improving prediction. Heterogeneity in the population is modeled by identifying homogeneous subgroups that conform to different measurement models. In this study, mixture IRT models were applied to the Extraversion and Neuroticism scales of the Amsterdam Biographical Questionnaire, and a three-class mixture version of the nominal response model was identified as the best-fitting model. The latent classes differed with respect to social desirability and ethnic background. Within latent classes, response tendencies demonstrated a differential use of the "?" category. An important issue is whether applying mixture IRT models results in a better prediction of relevant external criteria compared with a one-class model. For the Neuroticism scale the prediction improved, but not for the Extraversion scale. The results demonstrate the possible advantage of applying mixture IRT models to personality questionnaires.
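The core idea, latent subgroups conforming to different measurement models, estimated by EM, can be illustrated with a much simpler stand-in than the mixture nominal response model the study fits: a two-class mixture of independent Bernoulli items (a basic latent class model). The sketch below is purely illustrative and all names are invented; the E-step computes posterior class memberships and the M-step updates mixing weights and class-specific item profiles.

```python
import random

def lca_em(data, n_iter=200):
    """EM for a two-class mixture of independent Bernoulli items:
    a minimal latent-class analogue of a mixture measurement model."""
    k = len(data[0])
    p = [[0.6] * k, [0.4] * k]   # p[c][i] = P(item i endorsed | class c)
    pi = [0.5, 0.5]              # mixing weights
    for _ in range(n_iter):
        # E-step: posterior class membership for each person
        resp = []
        for x in data:
            w = []
            for c in (0, 1):
                lik = pi[c]
                for xi, pc in zip(x, p[c]):
                    lik *= pc if xi else (1.0 - pc)
                w.append(lik)
            tot = w[0] + w[1]
            resp.append([wi / tot for wi in w])
        # M-step: update mixing weights and item profiles
        for c in (0, 1):
            rc = sum(r[c] for r in resp)
            pi[c] = rc / len(data)
            p[c] = [sum(r[c] * x[i] for r, x in zip(resp, data)) / rc
                    for i in range(k)]
    return pi, p

# Simulate two well-separated classes of equal size and recover them.
random.seed(5)
profiles = ([0.8] * 6, [0.2] * 6)
data = [[int(random.random() < q) for q in profiles[random.randrange(2)]]
        for _ in range(600)]
pi, p = lca_em(data)
hi = 0 if sum(p[0]) > sum(p[1]) else 1   # identify the high-endorsement class
```

With well-separated classes the EM iterations recover both the mixing proportion and the class-specific item profiles; in real mixture IRT applications, as in the study above, class separation is weaker and model comparison (e.g., against a one-class model) decides how many classes to retain.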