Our goal in this investigation was to evaluate the reliability of scores from the Balanced Inventory of Desirable Responding (BIDR) more comprehensively than in prior research, using a generalizability-theory framework based on both dichotomous and polytomous scoring of items. Generalizability coefficients accounting for specific-factor, transient, and random-response error ranged from .64 to .75 for the BIDR's Self-Deception Enhancement (SDE) and Impression Management (IM) subscale scores, and these values were systematically lower than corresponding alpha (.66 to .83) and 1-week test-retest (.78 to .86) coefficients. Polytomous scoring provided higher reliability than dichotomous scoring on nearly all indexes reported. Random-response error (8%-17%) and specific-factor error (11%-17%) exceeded transient error (3%-6%) for both subscales and scoring methods. Doubling the number of items on a single occasion provided greater improvements in generalizability (.76-.83) than aggregating scores across 2 administrations (.72-.81). Both scoring methods provided reasonably high indexes of consistency (φ coefficients ≥ .91) at cut scores on the IM scale for detecting faked responses when all sources of error were taken into account. Implications of these results for common uses of the BIDR are discussed.
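The alpha coefficients reported above are internal-consistency estimates computed from a persons × items score matrix. As a minimal sketch of how such a coefficient is obtained (using hypothetical data, not the BIDR item responses analyzed in the study):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_persons, n_items) score matrix:
    (k / (k - 1)) * (1 - sum of item variances / variance of total scores)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item across persons
    total_var = items.sum(axis=1).var(ddof=1)    # variance of persons' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 6 respondents x 4 polytomously scored items
scores = np.array([
    [5, 4, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [2, 1, 2, 2],
    [5, 5, 4, 5],
    [1, 2, 1, 1],
])
alpha = cronbach_alpha(scores)
```

Alpha treats a single administration, so it cannot separate transient from specific-factor error; the generalizability coefficients in the abstract partition those sources explicitly, which is why they come out systematically lower.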
Educators and administrators often use sub-scores derived from state accountability assessments to diagnose learning, inform instruction, and guide curriculum planning. However, observed sub-scores have several psychometric limitations, two of which were the focus of the present study: (1) limited reliability due to their short lengths, and (2) little distinct information in the sub-scores of most existing assessments. The present study evaluated the extent to which these limitations might be overcome by incorporating collateral information into sub-score estimation. The three sources of collateral information under investigation were (1) information from other sub-scores, (2) the schools that students attended, and (3) school-level scores on the same test taken by previous cohorts of students in each school. Kelley's and Shin's methods were implemented in a fully Bayesian framework and were adapted to incorporate differing levels of collateral information. Results were evaluated against three comparison criteria: signal-to-noise ratio, standard error of estimate, and sub-score separation index. The data came from state accountability assessments. Consistent with the literature, using information from other sub-scores produced sub-scores with enhanced precision but reduced profile variability. This finding suggests that collateral information internal to the test can enhance sub-score reliability, but at the expense of the distinctness of each individual sub-score. Using information indicating the schools that students attended led to a small gain in sub-score precision without losing sub-score distinctness. Furthermore, such information was found to have the potential to improve sub-score validity by addressing Simpson's paradox when sub-score correlations were not invariant across schools.
Using previous-year school-level sub-score information was found to have the potential to enhance both precision and distinctness for school-level sub-scores, although not for student-level sub-scores. School-level sub-scores were found to exhibit satisfactory …
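Kelley's method, named above as one of the two estimation approaches, regresses each observed sub-score toward a reference mean in proportion to its unreliability. A minimal sketch of the classical (non-Bayesian) form, with hypothetical scores and an assumed reliability value:

```python
import numpy as np

def kelley_estimate(observed, reliability):
    """Kelley's regressed true-score estimate:
    T_hat = reliability * X + (1 - reliability) * group mean.
    Less reliable scores are shrunk more strongly toward the mean."""
    observed = np.asarray(observed, dtype=float)
    mu = observed.mean()
    return reliability * observed + (1 - reliability) * mu

# Hypothetical sub-scores for five students on a short subtest
subscores = np.array([10.0, 14.0, 18.0, 22.0, 26.0])
shrunk = kelley_estimate(subscores, reliability=0.6)
# Each score moves 40% of the way toward the group mean of 18.0
```

In the fully Bayesian adaptation described in the abstract, the reference mean and shrinkage weight are not fixed constants but are informed by the collateral sources (other sub-scores, school membership, prior-cohort school-level scores), which is what allows precision to improve without necessarily collapsing sub-score distinctness.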