One aspect of higher order social cognition is empathy, a psychological construct comprising a cognitive (recognizing emotions) and an affective (responding to emotions) component. The complex nature of empathy complicates the accurate measurement of these components. The most widely used measure of empathy is the Interpersonal Reactivity Index (IRI). However, the factor structure of the IRI as it is predominantly used in the psychological literature differs from Davis's original four-factor model in that it arbitrarily combines the subscales to form two factors: cognitive and affective empathy. This two-factor model of the IRI, although popular, has yet to be examined for psychometric support. In the current study, we examine, for the first time, the validity of this alternative model. A confirmatory factor analysis showed poor model fit for this two-factor structure. Additional analyses offered support for the original four-factor model, as well as a hierarchical model for the scale. In line with previous findings, females scored higher on the IRI than males. Our findings indicate that the IRI, as it is currently used in the literature, does not accurately measure cognitive and affective empathy and highlight the advantages of using the original four-factor structure of the scale for empathy assessments.
As the use of diagnostic assessment systems transitions from research applications to large-scale assessments for accountability purposes, reliability methods that provide evidence at each level of reporting are needed. The purpose of this paper is to summarize one simulation-based method for estimating and reporting reliability for an operational, large-scale, diagnostic assessment system. This assessment system reports results and associated reliability evidence at the individual skill level for each academic content standard and for broader content strands. The system also summarizes results for the overall subject using achievement levels, which are often included in state accountability metrics. Results are summarized as measures of association between true and estimated mastery status at each level of reporting.
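To make the reporting levels concrete, the sketch below shows one way a simulation-based reliability check of this kind could be set up. It is a minimal illustration, not the operational procedure summarized in the paper: the base rates, the noise model standing in for model-based mastery probabilities, the 0.5 classification threshold, and the choice of Cohen's kappa as the association measure are all assumptions made for the example.

```python
# Minimal sketch of simulation-based reliability evidence at the skill level.
# Assumptions: base rates, noise model, 0.5 threshold, and Cohen's kappa are
# illustrative choices, not the operational method described in the paper.
import numpy as np

rng = np.random.default_rng(0)
n_students, n_skills = 5000, 4
base_rates = np.array([0.6, 0.5, 0.7, 0.4])   # assumed true mastery rates

# Simulate true mastery status for each student on each skill.
true_mastery = rng.random((n_students, n_skills)) < base_rates

# Simulate estimated mastery probabilities: true status plus noise, standing
# in for the probabilities a diagnostic model would produce from responses.
noise = rng.normal(0.0, 0.2, size=true_mastery.shape)
p_hat = np.clip(true_mastery + noise, 0.01, 0.99)
est_mastery = p_hat >= 0.5                     # classify at a 0.5 threshold

def cohens_kappa(a, b):
    """Chance-corrected agreement between two binary vectors."""
    po = np.mean(a == b)
    pe = np.mean(a) * np.mean(b) + np.mean(~a) * np.mean(~b)
    return (po - pe) / (1.0 - pe)

# Skill-level evidence: association between true and estimated mastery status.
for k in range(n_skills):
    agree = np.mean(true_mastery[:, k] == est_mastery[:, k])
    kappa = cohens_kappa(true_mastery[:, k], est_mastery[:, k])
    print(f"skill {k + 1}: agreement = {agree:.3f}, kappa = {kappa:.3f}")
```

The same true-versus-estimated comparison could be rolled up to content strands or overall achievement levels by aggregating the simulated skill statuses before computing the association measure.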
Diagnostic assessments measure the knowledge, skills, and understandings of students at a smaller and more actionable grain size than traditional scale-score assessments. Results of diagnostic assessments are reported as a mastery profile, indicating which knowledge, skills, and understandings the student has mastered and which ones may need more instruction. These mastery decisions are based on probabilities of mastery derived from diagnostic classification models (DCMs). This report outlines a Bayesian framework for the estimation and evaluation of DCMs. Findings illustrate the utility of the Bayesian framework for estimating and evaluating DCMs in applied settings. Specifically, the findings demonstrate how a variety of DCMs can be defined within the same conceptual framework. Additionally, using this framework, the evaluation of model fit is more straightforward and easier to interpret with intuitive graphics. Throughout, recommendations are made for specific implementation decisions for the estimation process and the assessment of model fit.
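As a rough illustration of where the reported mastery probabilities come from, the sketch below computes posterior probabilities of attribute mastery for a single student under a simple DINA-style model with fixed item parameters and exact enumeration of latent profiles. The Q-matrix, slip and guess values, uniform prior, and reporting threshold are assumptions made for the example; they are not the models, parameters, or estimation procedure (e.g., MCMC) described in the report.

```python
# Minimal sketch: posterior mastery probabilities for one student under an
# assumed DINA-style model (assumed Q-matrix, slip/guess values, uniform prior).
import itertools
import numpy as np

q_matrix = np.array([[1, 0],      # item 1 requires attribute 1
                     [0, 1],      # item 2 requires attribute 2
                     [1, 1]])     # item 3 requires both attributes
slip, guess = 0.1, 0.2            # assumed item parameters, shared across items
responses = np.array([1, 0, 1])   # one student's scored item responses

# Enumerate all latent attribute profiles (2^K for K attributes).
n_attrs = q_matrix.shape[1]
profiles = np.array(list(itertools.product([0, 1], repeat=n_attrs)))

def profile_likelihood(profile):
    """Likelihood of the response vector given one attribute profile (DINA)."""
    expected = np.all(profile >= q_matrix, axis=1)      # has all required attrs?
    p_correct = np.where(expected, 1.0 - slip, guess)   # P(correct) per item
    return np.prod(np.where(responses == 1, p_correct, 1.0 - p_correct))

likelihoods = np.array([profile_likelihood(p) for p in profiles])
prior = np.full(len(profiles), 1.0 / len(profiles))     # uniform prior on profiles
posterior = prior * likelihoods
posterior /= posterior.sum()

# Marginal probability of mastery for each attribute; a reporting rule might
# flag mastery when this probability exceeds a chosen threshold (e.g., 0.8).
p_mastery = (profiles * posterior[:, None]).sum(axis=0)
print("posterior P(mastery) per attribute:", np.round(p_mastery, 3))
```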