Discriminant validity was originally presented as a set of empirical criteria that can be assessed from multitrait-multimethod (MTMM) matrices. Because datasets used by applied researchers rarely lend themselves to MTMM analysis, the need to assess discriminant validity in empirical research has given rise to numerous techniques, some of which have been introduced in an ad hoc manner and without rigorous methodological support. We review various definitions of and techniques for assessing discriminant validity and provide a generalized definition of discriminant validity based on the correlation between two measures after measurement error has been taken into account. We then review techniques that have been proposed for discriminant validity assessment, demonstrating some problems and equivalencies of these techniques that have gone unnoticed by prior research. After conducting Monte Carlo simulations that compare the techniques, we present two techniques, CICFA(sys) and χ²(sys), that applied researchers can use to assess discriminant validity.
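To make the generalized definition concrete, the following is a minimal sketch (not the authors' CICFA(sys) or χ²(sys) procedures themselves) of estimating the correlation between two scales after correcting for measurement error, using Cronbach's alpha as the reliability estimate. The data, the scale composition, and the .90 cutoff are all assumptions for illustration; the appropriate threshold is context-dependent.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def disattenuated_correlation(scale_a, scale_b):
    """Correlation between two scale scores corrected for attenuation
    due to measurement error: r_xy / sqrt(rel_x * rel_y)."""
    r = np.corrcoef(scale_a.sum(axis=1), scale_b.sum(axis=1))[0, 1]
    return r / np.sqrt(cronbach_alpha(scale_a) * cronbach_alpha(scale_b))

# Hypothetical data: two 4-item scales, 200 respondents.
rng = np.random.default_rng(0)
trait = rng.normal(size=(200, 1))
a = 0.7 * trait + rng.normal(scale=0.7, size=(200, 4))
b = 0.5 * trait + rng.normal(scale=0.9, size=(200, 4))

r_star = disattenuated_correlation(a, b)
# An assumed cutoff of .90; values near 1 suggest the two scales
# may not measure empirically distinct constructs.
print(f"disattenuated r = {r_star:.2f}; "
      f"potential discriminant validity problem: {r_star > 0.90}")
```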
This study disproves the following six common misconceptions about coefficient alpha: (a) Alpha was first developed by Cronbach. (b) Alpha equals reliability. (c) A high value of alpha is an indication of internal consistency. (d) Reliability will always be improved by deleting items using “alpha if item deleted.” (e) Alpha should be greater than or equal to .7 (or, alternatively, .8). (f) Alpha is the best choice among all published reliability coefficients. This study discusses the inaccuracy of each of these misconceptions and provides a correct statement. This study recommends that the assumptions of unidimensionality and tau-equivalency be examined before the application of alpha and that structural equation modeling (SEM)–based reliability estimators be substituted for alpha when one of these conditions is not satisfied. This study also provides formulas for SEM-based reliability estimators that do not rely on matrix notation and step-by-step explanations for the computation of SEM-based reliability estimates.
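As a companion to the recommendation above, here is a minimal sketch of one widely used SEM-based estimator, congeneric (composite) reliability, computed from standardized factor loadings without matrix notation. The loading values are hypothetical, not taken from the article.

```python
import numpy as np

def congeneric_reliability(loadings, error_variances):
    """SEM-based (congeneric/composite) reliability:
    (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = np.sum(loadings)
    return s**2 / (s**2 + np.sum(error_variances))

# Hypothetical CFA solution for a 4-item congeneric scale.
loadings = np.array([0.8, 0.7, 0.6, 0.5])
errors = 1 - loadings**2   # assumes standardized items
print(f"congeneric reliability = {congeneric_reliability(loadings, errors):.3f}")
```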
The current conventions for test score reliability coefficients are unsystematic and chaotic. Reliability coefficients have long been given unrelated names, derived through disparate methods, and represented inconsistently. Such inconsistency prevents organizational researchers from seeing the whole picture and misleads them into using coefficient alpha unconditionally. This study provides a systematic naming convention, formula-generating methods, and methods of representing each of the reliability coefficients. It offers an easy-to-use solution to the issue of choosing between coefficient alpha and composite reliability, introduces a calculator that enables users to obtain the values of various multidimensional reliability coefficients with a few mouse clicks, and presents illustrative numerical examples that clarify the characteristics and computation of reliability coefficients.
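One way to see the alpha-versus-composite-reliability choice is through a small numerical example: under tau-equivalent (equal) loadings the two coincide, whereas unequal loadings make alpha an underestimate. The covariance matrices below are constructed for illustration, not taken from the article or its calculator.

```python
import numpy as np

def alpha_from_cov(cov):
    """Coefficient alpha from an item covariance matrix."""
    k = cov.shape[0]
    return k / (k - 1) * (1 - np.trace(cov) / cov.sum())

def composite_from_loadings(loadings, error_variances):
    s = loadings.sum()
    return s**2 / (s**2 + error_variances.sum())

# Tau-equivalent case: equal loadings -> alpha equals composite reliability.
lam_eq = np.array([0.7, 0.7, 0.7])
err_eq = np.array([0.51, 0.51, 0.51])
cov_eq = np.outer(lam_eq, lam_eq) + np.diag(err_eq)
print(alpha_from_cov(cov_eq), composite_from_loadings(lam_eq, err_eq))  # both ~.742

# Congeneric case: unequal loadings -> alpha < composite reliability.
lam_cg = np.array([0.9, 0.7, 0.4])
err_cg = 1 - lam_cg**2
cov_cg = np.outer(lam_cg, lam_cg) + np.diag(err_cg)
print(alpha_from_cov(cov_cg), composite_from_loadings(lam_cg, err_cg))  # ~.688 < ~.722
```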
Controversy over which reliability estimators should be used persists because of a lack of knowledge about their accuracy. Simulation is an effective tool for obtaining an answer, but existing simulation studies yield contradictory results regarding which reliability estimators are best. The causes of these inconsistent conclusions have yet to be discussed. This study reanalyzes existing studies to understand these contradictions. The most important cause is that previous studies consider only a few reliability estimators. This study examines approximately 30 reliability estimators and finds that there is no single most accurate reliability estimator across all data types. Instead, several reliability estimators are comparably accurate for unidimensional data (congeneric reliability, Guttman's λ2, and ten Berge and Zegers's μ). Likewise, multiple reliability estimators perform similarly for multidimensional data (multidimensional parallel reliability, correlated-factors reliability, and second-order factor reliability). Whereas many recent studies support factor analysis (FA) reliability estimators, this study shows that not all FA reliability estimators are accurate and that some cause severe overestimation.
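For reference, here is a minimal sketch of one of the estimators found accurate for unidimensional data, Guttman's λ2, computed from an item covariance matrix. The simulated data are hypothetical and not drawn from the study's simulation conditions.

```python
import numpy as np

def guttman_lambda2(cov):
    """Guttman's lambda2 from a k x k item covariance matrix:
    1 - trace/total + sqrt(k/(k-1) * sum of squared off-diagonals) / total."""
    k = cov.shape[0]
    total = cov.sum()
    off = cov - np.diag(np.diag(cov))   # off-diagonal covariances only
    c2 = (off**2).sum()                 # sum of squared off-diagonal elements
    return 1 - np.trace(cov) / total + np.sqrt(k / (k - 1) * c2) / total

# Simulated unidimensional data: 5 items, 300 respondents (hypothetical).
rng = np.random.default_rng(1)
factor = rng.normal(size=(300, 1))
items = 0.6 * factor + rng.normal(scale=0.8, size=(300, 5))
print(f"lambda2 = {guttman_lambda2(np.cov(items, rowvar=False)):.3f}")
```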