Student success and persistence within the major and university were examined through hierarchical linear and logistic regression analyses for two cohorts of engineering students. Indicators of success and persistence were based on theoretical and empirical evidence and included both cognitive and noncognitive variables. Cognitive variables included high school rank, SAT scores, and university cumulative grade point average. Noncognitive factors included academic motivation and institutional integration. Outcome variables included grade point average, enrollment at the university, and status as an engineering major. Gender differences also were evaluated. Several significant relationships among the variables were found. For instance, increased levels of motivation were significantly related to continuing in the major. Implications and directions for future research are discussed.
The Revised Two-Factor Study Process Questionnaire (R-SPQ-2F) is a measure of university students’ approach to learning. Original evaluation of the scale’s psychometric properties was based on a sample of Hong Kong university students’ scores. The purpose of this study was to test and cross-validate the R-SPQ-2F factor structure, based on separate cohort data (Cohort 1: n = 1,490; Cohort 2: n = 1,533), among students attending a university in the United States. Factor analytic results did not support the scale’s original factor structure, instead suggesting an alternative four-factor model of the scale data. In the cross-validation study, multisample confirmatory factor analysis results indicated that the scale’s measurement model parameters (e.g., factor loadings) were invariant across independent samples. Despite support for the scale’s respecified factor structure for Western university students, continued research is recommended to improve the scale’s psychometric properties. Implications for test score use and interpretation are discussed.
Objective: This study investigated the combination of item response theory and computerized adaptive testing (CAT) for psychiatric measurement as a means of reducing the burden of research and clinical assessments.

Methods: Data were from 800 participants in outpatient treatment for a mood or anxiety disorder; they completed 616 items of the 626-item Mood and Anxiety Spectrum Scales (MASS) at two times. The first administration was used to design and evaluate a CAT version of the MASS by using post hoc simulation. The second confirmed the functioning of CAT in live testing.

Results: Tests of competing models based on item response theory supported the scale's bifactor structure, consisting of a primary dimension and four group factors (mood, panic-agoraphobia, obsessive-compulsive, and social phobia). Both simulated and live CAT showed a 95% average reduction (585 items) in items administered (24 and 30 items, respectively) compared with administration of the full MASS. The correlation between scores on the full MASS and the CAT version was .93. For the mood disorder subscale, differences in scores between two groups of depressed patients, one with bipolar disorder and one without, on the full scale and on the CAT showed effect sizes of .63 (p<.003) and 1.19 (p<.001) standard deviation units, respectively, indicating better discriminant validity for CAT.

Conclusions: Instead of using small fixed-length tests, clinicians can create item banks with a large item pool, and a small set of the items most relevant for a given individual can be administered with no loss of information, yielding a dramatic reduction in administration time and patient and clinician burden.

Psychiatric measurement has been based primarily on subjective judgment and classical test theory. Typically, impairment level is determined by a total score, which requires that the same items be administered to all respondents.
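The effect sizes above are reported in standard deviation units. As a minimal illustration of how such a standardized mean difference (Cohen's d with a pooled standard deviation) is computed, the sketch below uses made-up scores, not MASS data:

```python
import math

def cohens_d(x, y):
    """Cohen's d: mean difference in pooled-standard-deviation units."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    # Sample variances (n - 1 denominator) for each group.
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    # Pool the two variances, weighted by degrees of freedom.
    pooled_sd = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled_sd
```

With illustrative scores, `cohens_d([2, 4, 6], [1, 3, 5])` yields 0.5, i.e., the group means differ by half a pooled standard deviation.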
An alternative to administration of a full scale is adaptive testing. This form of testing has recently emerged in mental health research (3,4). Procedures based on item response theory (5) can be used to obtain estimates for items (for example, difficulty or discrimination) and individuals (for example, severity of depression) to more efficiently identify suitable item subsets for each individual. This approach to testing is referred to as computerized adaptive testing (CAT) and is immediately applicable to psychiatric services (6-10). For example, a depression inventory can be administered adaptively, such that an individual responds only to items that are most appropriate for assessing his or her level of depression. The net result is that a small, optimal number of items is administered to the individual without loss of measurement precision. A complication of applying item response theory to psychiatric measurement problems is that, unlike traditional ability testing (for example, mathematics achievement), for which approximately unidimensional scales are used, psychiatric...
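The adaptive loop described above can be sketched in a few lines: a two-parameter logistic (2PL) IRT model, maximum-information item selection, and an expected a posteriori (EAP) trait estimate. Everything here is an illustrative assumption, including the function names and the flat item bank; it is not the MASS item bank or the authors' implementation.

```python
import math

def p2pl(theta, a, b):
    """2PL probability of endorsing an item with discrimination a, difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def info(theta, a, b):
    """Fisher information of a 2PL item at trait level theta."""
    p = p2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def eap(responses, bank, grid):
    """EAP trait estimate with a standard-normal prior, evaluated on a grid."""
    post = []
    for t in grid:
        like = math.exp(-t * t / 2.0)  # unnormalized N(0, 1) prior
        for i, u in responses:
            p = p2pl(t, *bank[i])
            like *= p if u else (1.0 - p)
        post.append(like)
    z = sum(post)
    return sum(t * w for t, w in zip(grid, post)) / z

def cat_session(bank, answer, n_items=5):
    """Adaptively administer n_items; answer(i) returns the 0/1 response to item i."""
    grid = [g / 10.0 for g in range(-40, 41)]
    theta, responses, used = 0.0, [], set()
    for _ in range(n_items):
        # Pick the unadministered item most informative at the current estimate.
        i = max((j for j in range(len(bank)) if j not in used),
                key=lambda j: info(theta, *bank[j]))
        used.add(i)
        responses.append((i, answer(i)))
        theta = eap(responses, bank, grid)
    return theta
```

A respondent who endorses every presented item drives the trait estimate upward, and one who endorses none drives it downward, which is the basic mechanism by which a handful of well-chosen items can stand in for a long fixed form.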
The transformative power of compassion is critical to leader performance and has garnered increasing interest in business settings. Despite substantive contributions toward the conceptual understanding of compassion, prior empirical work on the relationship between compassion and leader performance is relatively limited. This article presents compassionate leader behavior as a conceptualization of a new leadership construct. A two-stage, sequential, and equal status mixed method research design was utilized to develop and validate a measure of compassionate leadership. Study 1 used a phenomenological approach to understand how leaders engage with compassion and how their experiences and behaviors associated with compassion affect performance within the context of their leadership.

The Compassionate Leader Behavior Index (CBLI) is permitted for broad use in noncommercial settings, including but not limited to academically focused research to include dissertations and theses and original works of scholarship and grant activity within the limitations of the publication copyright, so long as this work is appropriately and correctly cited. To use the instrument in a commercial and/or for-profit setting, or for questions regarding permission of