Numerous rules-of-thumb have been suggested for determining the minimum number of subjects required to conduct multiple regression analyses. These rules-of-thumb are evaluated by comparing their results against those based on power analyses for tests of hypotheses of multiple and partial correlations. The results did not support the use of rules-of-thumb that simply specify some constant (e.g., 100 subjects) as the minimum number of subjects or a minimum ratio of number of subjects (N) to number of predictors (m). Some support was obtained for a rule-of-thumb that N ≥ 50 + 8m for the multiple correlation and N ≥ 104 + m for the partial correlation. However, the rule-of-thumb for the multiple correlation yields values too large for N when m ≥ 7, and both rules-of-thumb assume all studies have a medium-size relationship between criterion and predictors. Accordingly, a slightly more complex rule-of-thumb is introduced that estimates minimum sample size as a function of effect size as well as the number of predictors. It is argued that researchers should use methods to determine sample size that incorporate effect size.
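The two rules-of-thumb the abstract singles out can be written directly as functions of the number of predictors; the sketch below is a minimal illustration (the function names are my own, not from the paper), and both rules assume a medium-size relationship between criterion and predictors:

```python
def n_multiple_correlation(m):
    """Rule-of-thumb minimum N for testing the multiple correlation
    with m predictors: N >= 50 + 8m (assumes a medium effect size;
    the abstract notes it overestimates N when m >= 7)."""
    return 50 + 8 * m

def n_partial_correlation(m):
    """Rule-of-thumb minimum N for testing a partial correlation
    with m predictors: N >= 104 + m (also assumes a medium effect)."""
    return 104 + m

# Example: a regression with 6 predictors
print(n_multiple_correlation(6))  # 98
print(n_partial_correlation(6))   # 110
```

Note that for small m the partial-correlation rule is the more demanding of the two, which is consistent with the abstract's point that a single constant or a single N-to-m ratio cannot serve both tests.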
Confusion in the literature between the concepts of internal consistency and homogeneity has led to a misuse of coefficient alpha as an index of item homogeneity. Coefficient alpha is actually a complexly determined test statistic, with item homogeneity being only one influence on its magnitude. The related statistic, the average intercorrelation, has similar difficulties. Several indices of item homogeneity derived from the model of common factor analysis are offered as alternatives.
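Coefficient alpha itself is a simple function of the item variances and the total-score variance, α = k/(k−1) · (1 − Σσ²ᵢ/σ²ₓ). A minimal sketch (the function name is illustrative) makes the abstract's point concrete: alpha is driven by test length as well as by item interrelatedness, so a high alpha need not indicate homogeneous items:

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an (n_subjects x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
    Note: alpha rises as k grows even when the average item
    intercorrelation is held fixed, so it is not a homogeneity index."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)
```

For perfectly parallel items the statistic reaches 1.0; for a fixed average intercorrelation it increases toward 1.0 as items are added, which is exactly the length dependence that separates internal consistency from homogeneity.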
The root mean square error of approximation (RMSEA) and the comparative fit index (CFI) are two indices widely applied to assess the fit of structural equation models. Because these two indices are viewed positively by researchers, one might presume that their values would yield comparable qualitative assessments of model fit for any data set. When RMSEA and CFI offer different evaluations of model fit, we argue that researchers are likely to be confused and potentially to draw incorrect research conclusions. We derive the necessary as well as the sufficient conditions for inconsistent interpretations of these indices. We also study inconsistency in results for RMSEA and CFI at the sample level. Rather than indicating that the model is misspecified in a particular manner or that there are any flaws in the data, the two indices can disagree because (a) they evaluate, by design, the magnitude of the model's fit function value from different perspectives; (b) the cutoff values for these indices are arbitrary; and (c) the meaning of "good" fit and its relationship with fit indices are not well understood. In the context of inconsistent judgments of fit using RMSEA and CFI, we discuss the implications of using cutoff values to evaluate model fit in practice and to design SEM studies.
After noting the contradictions and confusion in the literature on determining the optimal number of scale points in a rating scale, a mathematical model is suggested that allows for the simulation of the rating situation. The model involves generating data with different item variance-covariance structures and with different numbers of scale points. Such data were generated and used to calculate three reliability measures. The effects of different numbers of scale points and different covariance structures upon these reliability measures were examined, and the results help explain a large number of empirical studies exploring the "optimal number of scale points" problem. Also discussed are the implications of these data for the test user.
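The simulation the abstract describes can be sketched in a few lines: generate item scores with a chosen variance-covariance structure, discretize them to a given number of scale points, and compute a reliability coefficient. The details below (an equicorrelated latent structure, equal-width cut points, and coefficient alpha as the reliability measure) are my own simplifications, not the paper's exact model:

```python
import numpy as np

def simulate_alpha(n_subjects, n_items, rho, n_points, seed=0):
    """Generate equicorrelated normal item scores, discretize them to
    n_points rating-scale categories, and return coefficient alpha."""
    rng = np.random.default_rng(seed)
    cov = np.full((n_items, n_items), rho)
    np.fill_diagonal(cov, 1.0)
    latent = rng.multivariate_normal(np.zeros(n_items), cov, size=n_subjects)
    # Equal-width cut points on the latent scale -> categories 1..n_points
    cuts = np.linspace(-1.5, 1.5, n_points - 1)
    scores = np.digitize(latent, cuts) + 1
    k = n_items
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)
```

Varying `n_points` and `rho` in calls to this function mirrors the paper's design of crossing the number of scale points with different covariance structures and observing the effect on reliability.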
Population and sample simulation approaches were used to compare the performance of parallel analysis using principal component analysis (PA-PCA) and parallel analysis using principal axis factoring (PA-PAF) to identify the number of underlying factors. Additionally, the accuracies of the mean eigenvalue and the 95th percentile eigenvalue criteria were examined. The 95th percentile criterion was preferable for assessing the first eigenvalue using either extraction method. In assessing subsequent eigenvalues, PA-PCA tended to perform as well as or better than PA-PAF for models with one factor or multiple minimally correlated factors; the relative performance of the mean eigenvalue and the 95th percentile eigenvalue criteria depended on the number of variables per factor. PA-PAF using the mean eigenvalue criterion generally performed best if factors were more than minimally correlated or if one or more strong general factors as well as group factors were present.
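The PA-PCA procedure with either the mean or the 95th-percentile criterion can be sketched as follows. This is a simplified illustration (not the authors' code): observed correlation-matrix eigenvalues are compared against eigenvalues from random normal data of the same dimensions, and leading components are retained while they exceed the criterion:

```python
import numpy as np

def parallel_analysis_pca(data, n_sims=200, percentile=95, seed=0):
    """PA-PCA sketch: retain components whose observed correlation-matrix
    eigenvalues exceed the chosen percentile (e.g., 50 for the mean-like
    criterion, 95 for the 95th-percentile criterion) of eigenvalues
    from random normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    sim = np.empty((n_sims, p))
    for i in range(n_sims):
        r = rng.standard_normal((n, p))
        sim[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    crit = np.percentile(sim, percentile, axis=0)
    # Count leading observed eigenvalues that exceed the criterion
    n_factors = 0
    for o, c in zip(obs, crit):
        if o > c:
            n_factors += 1
        else:
            break
    return n_factors
```

PA-PAF differs from this sketch in that eigenvalues are extracted from a reduced correlation matrix (with communality estimates on the diagonal) rather than the full correlation matrix; the abstract's comparisons concern exactly that choice of extraction method and of criterion.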
Time-use diaries were collected over a 3-year period for 2 cohorts of 2- and 4-year-old children. TV viewing declined with age. Time spent in reading and educational activities increased with age on weekdays but declined on weekends. Time-use patterns were sex-stereotyped, and sex differences increased with age. As individuals' time in educational activities, social interaction, and video games increased, their time watching entertainment TV declined, but time spent playing covaried positively with entertainment TV. Educational TV viewing was not related to time spent in non-TV activities. Maternal education and home environment quality predicted frequent viewing of educational TV programs and infrequent viewing of entertainment TV. The results do not support a simple displacement hypothesis; the relations of TV viewing to other activities depend on the program content, the nature of the competing activity, and the environmental context.