In applications of covariance structure modeling in which an initial model does not fit sample data well, it has become common practice to modify that model to improve its fit. Because this process is data driven, it is inherently susceptible to capitalization on chance characteristics of the data, thus raising the question of whether model modifications generalize to other samples or to the population. This issue is discussed in detail and is explored empirically through sampling studies using 2 large sets of data. Results demonstrate that over repeated samples, model modifications may be very inconsistent and cross-validation results may behave erratically. These findings lead to skepticism about generalizability of models resulting from data-driven modifications of an initial model. The use of alternative a priori models is recommended as a preferred strategy.
Contemporary models of organizational withdrawal stress the importance of perceptions of available alternative positions in translating job affect into behavioral intentions to leave an organization. Data reported in labor market studies in which unemployment rates in a geographical area, industry, or time period are correlated with voluntary termination rates from work organizations overwhelmingly support the hypothesized role of available alternatives in the satisfaction-quit process. Studies conducted at the individual termination-decision level, however, support neither the theoretical propositions nor generalization of the results of analyses of aggregated data to individuals' decisions to leave a job or organization. Three explanations for the inconsistencies between the theories and macrodata on the one hand and individual decisions on the other are offered. An integrated explanation based on the direct influence of economic/unemployment conditions on job affect is presented. This explanation is consistent with theoretical models of social attitudes and job affect formation as well as existing empirical evidence. The role of behavioral alternatives to quitting and the relevance of alternative activities to regular, full-time work are important in this explanation of behavioral and psychological job withdrawal.
Alternative strategies for two-sample cross-validation of covariance structure models are described and investigated. The strategies vary according to whether all (tight strategy) or some (partial strategy) of the model parameters are held constant when a calibration sample solution is re-fit to a validation sample covariance matrix. Justification is provided for three partial strategies. Conventional and alternative strategies for cross-validation are discussed as methods for evaluating the overall discrepancy of a model fit to a particular sample, where overall discrepancy arises from the combined influences of discrepancy of approximation and discrepancy of estimation (Cudeck & Henly, 1991). Results of a sampling study using empirical data show that for tighter strategies, simpler models are preferred in smaller samples. When partial cross-validation is employed, however, a more complex model may be supported even in a small sample. Implications for model comparison and evaluation, as well as for the issues of model complexity and sample size, are discussed.
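The tight versus partial distinction above can be sketched numerically. The following is a minimal illustration, not the authors' procedure: it simulates data from a hypothetical one-factor model, fits the factor loadings on a calibration half with a crude principal-factor approximation (a stand-in for proper ML estimation), and then evaluates the maximum-likelihood discrepancy against the validation covariance matrix twice, once with all parameters fixed (tight) and once with the unique variances re-estimated from the validation sample (one possible partial strategy). All names and parameter values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- simulate data from a hypothetical one-factor model ---
p, n = 4, 400
lam_true = np.array([0.8, 0.7, 0.6, 0.5])     # factor loadings
psi_true = 1.0 - lam_true**2                   # unique variances
f = rng.standard_normal(n)
X = np.outer(f, lam_true) + rng.standard_normal((n, p)) * np.sqrt(psi_true)

# split into calibration and validation halves
S_cal = np.cov(X[: n // 2], rowvar=False)
S_val = np.cov(X[n // 2 :], rowvar=False)

def ml_discrepancy(S, Sigma):
    """ML discrepancy F(S, Sigma); nonnegative, zero iff Sigma == S."""
    A = S @ np.linalg.inv(Sigma)
    return np.trace(A) - np.log(np.linalg.det(A)) - S.shape[0]

# crude one-factor fit on the calibration sample
# (principal-factor approximation, standing in for ML estimation)
w, V = np.linalg.eigh(S_cal)
lam_hat = V[:, -1] * np.sqrt(w[-1])
psi_hat = np.clip(np.diag(S_cal) - lam_hat**2, 0.05, None)
Sigma_cal = np.outer(lam_hat, lam_hat) + np.diag(psi_hat)

# tight strategy: every calibration parameter held constant
F_tight = ml_discrepancy(S_val, Sigma_cal)

# partial strategy: loadings fixed, unique variances re-estimated
psi_part = np.clip(np.diag(S_val) - lam_hat**2, 0.05, None)
Sigma_part = np.outer(lam_hat, lam_hat) + np.diag(psi_part)
F_partial = ml_discrepancy(S_val, Sigma_part)

print(f"tight CV discrepancy:   {F_tight:.4f}")
print(f"partial CV discrepancy: {F_partial:.4f}")
```

Because the partial strategy lets some parameters adapt to the validation sample, it will typically (though not necessarily) show a smaller cross-validation discrepancy, which is consistent with the abstract's observation that partial cross-validation can support a more complex model even in a small sample.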
Summary: This paper addresses Blau's critique of our early research on behavioral aggregates; we review the theoretical and empirical work on multiple behaviors. We address the definition and measurement of behavioral aggregates, the issue of common method variance, and factors influencing behavior choices. Research from other behavioral areas that uses an approach similar to ours is reviewed, as is relevant research on withdrawal aggregates since 1991. We argue that a focus on general withdrawal constructs rather than individual behaviors will generate significant scientific and practical advantages. The study of constructs will likely provide a basis for generalizations across situations, populations, and time.