Item response theory (IRT) model applications extend well beyond cognitive ability testing, and various patient-reported outcomes (PRO) measures are among the more prominent examples. PRO (and similar) constructs differ from cognitive ability constructs in many ways, and these differences have model-fitting implications. With a few notable exceptions, however, most IRT applications to PRO constructs rely on traditional IRT models, such as the graded response model. We review some notable differences between cognitive and PRO constructs and how these differences can present challenges for traditional IRT model applications. We then apply two models (the traditional graded response model and an alternative log-logistic model) to depression measure data drawn from the Patient-Reported Outcomes Measurement Information System project. We do not claim that one model is “a better fit” or more “valid” than the other; rather, we show that the log-logistic model may be more consistent with the construct of depression as a unipolar phenomenon. Clearly, the graded response and log-logistic models can lead to different conclusions about the psychometrics of an instrument and the scaling of individual differences. We underscore, too, that, in general, explorations of which model may be more appropriate cannot be decided only by fit index comparisons; these decisions may require the integration of psychometrics with theory and research findings on the construct of interest.
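For readers unfamiliar with the graded response model named above, a minimal sketch of how it assigns category probabilities may help. The parameter values here are hypothetical illustrations, not estimates from the depression data discussed in the abstract.

```python
import math

def grm_category_probs(theta, a, b):
    """Graded response model (Samejima): P(X = k | theta) for ordered categories.

    theta: latent trait level; a: item discrimination;
    b: ordered category boundary locations (length K-1 for K categories).
    """
    # Cumulative ("star") probabilities P(X >= k): P(X >= 0) = 1, P(X >= K) = 0.
    star = [1.0]
    star += [1.0 / (1.0 + math.exp(-a * (theta - bk))) for bk in b]
    star += [0.0]
    # Category probability is the difference of adjacent cumulative curves.
    return [star[k] - star[k + 1] for k in range(len(b) + 1)]

# Hypothetical 4-category item: discrimination 1.8, boundaries at -1.0, 0.0, 1.5.
probs = grm_category_probs(theta=0.5, a=1.8, b=[-1.0, 0.0, 1.5])
```

Because the boundaries are ordered, the cumulative curves never cross, so every category probability is nonnegative and the four probabilities sum to one.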
As naturally occurring examples of folk culture and creativity, internet memes provide a rich testbed to examine the interrelationships among cognitive and motivational factors that influence their impact. In two studies with participants recruited over the internet, we measured a variety of appraisals of both apolitical and political memes with a focus on the role of metaphorical aptness and personal relatability as predictors of comprehensibility and humor. Structural equation modeling was used to analyze interconnections among appraisals. A major network path connects relatability to aptness, which in turn is linked to appraisals of comprehensibility, humor, and propensity to share. For political memes, the congruity of the meme with the person's political position (liberal or conservative) has a powerful but indirect impact on the propensity to share it. These findings indicate that appraisals of memes are based on cognitive and motivational processes that also underlie metaphor comprehension and appreciation of humor.
As part of a scale development project, we fit a nominal response item response theory model to responses to the Health Care Engagement Measure (HEM). When using the original 5-point response format, categories were not ordered as intended for six of the 23 items. For the remaining items, the category boundary discrimination between Categories 0 (not at all true) and 1 (a little bit true) was only weakly discriminating, suggesting uninformative categories. When the lowest two categories were collapsed, psychometric properties improved greatly. Category boundary discriminations within items, however, varied markedly. Specifically, higher response category distinctions, such as responding 3 (very true) versus 2 (mostly true), were considerably more discriminating than lower response category distinctions. Implications for HEM scoring and for improving measurement precision at lower levels of the construct are presented, as is the unique role of the nominal response model in category analysis.
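The category boundary discriminations discussed above fall out of the nominal response model's slope parameters: each adjacent-category distinction is the difference between consecutive slopes. The sketch below illustrates this with made-up slope and intercept values (not HEM estimates); a near-zero slope difference between Categories 0 and 1 is the kind of pattern the abstract describes as weakly discriminating.

```python
import math

def nrm_category_probs(theta, a, c):
    """Nominal response model (Bock): P(X = k | theta) proportional to exp(a_k*theta + c_k)."""
    z = [ak * theta + ck for ak, ck in zip(a, c)]
    m = max(z)  # subtract max for numerical stability
    e = [math.exp(zi - m) for zi in z]
    s = sum(e)
    return [ei / s for ei in e]

# Hypothetical parameters for a 5-category item (slopes a, intercepts c).
a = [0.0, 0.1, 0.9, 1.7, 2.8]
c = [0.0, 0.2, 0.5, 0.3, -0.4]

probs = nrm_category_probs(theta=0.0, a=a, c=c)

# Category boundary discriminations: adjacent slope differences a_{k+1} - a_k.
# A small first entry (0.1 here) mirrors the weak 0-vs-1 distinction noted above.
cbd = [a[k + 1] - a[k] for k in range(len(a) - 1)]
```

Because the slopes need not be ordered, the model can also reveal categories that fail to order as intended, which is the diagnostic role the abstract highlights.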