Assessing the correctness of a structural equation model is essential to avoid drawing incorrect conclusions from empirical research. In the past, the chi-square test was recommended for assessing the correctness of the model, but this test has been criticized for its sensitivity to sample size. In reaction, an abundance of fit indexes has been developed. As a result of these developments, structural equation modeling packages now produce a long list of fit measures. One would think that this progression has led to a clear understanding of how to evaluate models with respect to model misspecifications. In this article we question the validity of approaches to model evaluation based on overall goodness-of-fit indexes. The argument against such usage is that they do not provide an adequate indication of the "size" of the model's misspecification. That is, they vary dramatically with the values of incidental parameters that are unrelated to the misspecification in the model. This is illustrated using simple but fundamental models. As an alternative method of model evaluation, we suggest using the expected parameter change in combination with the modification index (MI) and the power of the MI test.

In an influential paper, MacCallum, Browne, and Sugawara (1996) wrote, "If the model is truly a good model in terms of its fit in the population, we wish to avoid concluding that the model is a bad one. Alternatively, if the model is truly a bad one, we wish to avoid concluding that it is a good one" (p. 131). These two types of wrong conclusion correspond to what in statistics are known as Type I and Type II errors, whose probabilities of occurrence are called α and β, respectively. Although everybody would agree that α and β should be as small as
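The sensitivity of the chi-square test to sample size can be illustrated with a small sketch. The numbers below are hypothetical and not taken from the article: for a fixed minimized ML fit function value F_ML, the test statistic T = (N - 1) * F_ML grows linearly with sample size N, so even a trivially misspecified model is eventually rejected at any conventional critical value.

```python
# Illustrative sketch (hypothetical values, not from the article):
# the ML chi-square statistic is T = (N - 1) * F_ML, so for a fixed
# misspecification (fixed F_ML) it grows linearly with sample size N.
F_ML = 0.02       # hypothetical fixed minimum of the ML fit function
CRITICAL = 11.07  # chi-square critical value for df = 5, alpha = .05

for N in (100, 500, 1000, 5000):
    T = (N - 1) * F_ML
    print(f"N = {N:>4}: T = {T:6.2f}, reject: {T > CRITICAL}")
```

With these hypothetical numbers, the same small misfit passes the test at N = 100 or 500 but is rejected at N = 1000 and beyond, which is the sample-size dependence the abstract criticizes.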
covariance structure analysis, maximum likelihood estimation, likelihood ratio test, power of the test, local alternatives, noncentral chi-square, noncentrality parameter, Monte Carlo experiment
Although agree-disagree (AD) rating scales suffer from acquiescence response bias, entail enhanced cognitive burden, and yield data of lower quality (Krosnick, 1991; Saris, Revilla, Krosnick, & Schaeffer, forthcoming), these scales remain popular with researchers due to practical considerations (e.g., ease of item preparation, speed of administration, reduced administration costs). This paper shows that if researchers want to use AD scales, they should offer 5 answer categories rather than 7 or 11, because the latter yield data of lower quality. This is shown using data from four multitrait-multimethod (MTMM) experiments implemented in the third round of the European Social Survey. The quality of items with different rating scale lengths was computed and compared.
Some firms in internationally oriented industries are internationalized while other comparable firms in the same sector or industry are not. Observing this difference in strategic behavior among small firms led us to consider how differences in CEOs' attitudes, international orientation, and mindset might explain it. Therefore, this study adopts a cognitive perspective on management to explore the formation of the global mindset and the relationship between the global mindset of small-firm decision makers and their firms' internationalization behavior. A theory-based conceptual model and measurement instrument are developed and, using structural equation modeling, the model is estimated on empirical data from cross-sectional samples of small Norwegian and Portuguese firms. The study finds: (1) a strong causal relationship between the global mindset and firms' internationalization behavior; (2) the combination of the findings and substantive theory indicates that the main driver of firms' internationalization operates through the global mindset. This study also covers the factors that strongly influence the formation of a global mindset, especially the decision makers' work experience and personal characteristics in terms
Inspired by the research of Frank Andrews on the reliability and validity of survey questions, a large-scale research project was conducted in the Netherlands. The project comprised two stages. For this project, more than 600 survey questions were included in different surveys according to a multitrait-multimethod design. The resulting data were analyzed in two steps. In the first step, estimates of validity and reliability were obtained for each question. The second step was a meta-analysis of the variation in data quality found in the first step. This variation was related to question-specific characteristics, response scale characteristics, context characteristics, and design characteristics. The article describes how the results of this study can be of practical use. In addition, the authors compare them to results of similar studies in the United States, Austria, and other Western, Central, and Eastern European countries.