Assessing the applicability of frameworks developed in one country to other countries is an important step in establishing the generalizability of consumer behavior theories. In order for such comparisons to be meaningful, however, the instruments used to measure the theoretical constructs of interest have to exhibit adequate cross-national equivalence. We review the various forms of measurement invariance that have been proposed in the literature, organize them into a coherent conceptual framework that ties different requirements of measure equivalence to the goals of the research, and propose a practical, sequential testing procedure for assessing measurement invariance in cross-national consumer research. The approach is based on multisample confirmatory factor analysis and clarifies under what conditions meaningful comparisons of construct conceptualizations, construct means, and relationships between constructs are possible. An empirical application dealing with the single-factor construct of consumer ethnocentrism in Belgium, Great Britain, and Greece is provided to illustrate the procedure.
The University of Chicago Press
A fuller understanding of consumer behavior and further advancement of consumer research as an academic discipline requires that the validity of models of consumer behavior developed in one country (mostly the United States) be examined in other countries as well (Bagozzi 1994; Dholakia, Firat, and Bagozzi 1980). A key concern in extending theories and their associated constructs to other countries is whether the instruments designed to measure the relevant constructs are cross-nationally invariant (Hui and Triandis 1985). Measurement invariance refers to "whether or not, under different conditions of observing and studying phenomena, measurement operations yield measures of the same attribute" (Horn and McArdle 1992, p. 117). If evidence supporting a measure's invariance is lacking, conclusions ba… […] might be due to true differences between countries on the underlying construct or due to systematic biases in the way people from different countries respond to certain items. Similarly, cross-national differences in relationships between scale scores could indicate real differences in structural relations between constructs or scaling artifacts, differences in scale reliability, or even nonequivalence of the constructs involved. Findings of no differences between countries are open to analogous alternative interpretations. As succinctly stated by Horn (1991, p. 119): "Without evidence of measurement invariance, the conclusions of a study must be weak." Although a variety of techniques have been used to assess various aspects of measurement equivalence (cf. …
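The sequential testing procedure described above compares a series of nested multisample CFA models (configural, then metric, then scalar invariance), typically via chi-square difference tests between adjacent steps. A minimal Python sketch of that comparison logic, using hypothetical fit statistics rather than any values from the article:

```python
from scipy.stats import chi2

def chisq_diff_test(chisq_restricted, df_restricted, chisq_free, df_free):
    """Likelihood-ratio (chi-square difference) test between nested CFA models.
    A non-significant p-value supports the more restricted (more invariant) model."""
    d_chisq = chisq_restricted - chisq_free
    d_df = df_restricted - df_free
    return d_chisq, d_df, chi2.sf(d_chisq, d_df)

# Hypothetical multisample CFA fit statistics for three countries
# (illustrative values only, not estimates from the article):
fits = {
    "configural": (52.0, 30),   # same factor structure, all parameters free
    "metric":     (60.0, 38),   # factor loadings constrained equal across groups
    "scalar":     (78.0, 46),   # loadings and intercepts constrained equal
}

steps = ["configural", "metric", "scalar"]
for restricted, free in zip(steps[1:], steps[:-1]):
    d, ddf, p = chisq_diff_test(*fits[restricted], *fits[free])
    verdict = "supported" if p > 0.05 else "rejected"
    print(f"{free} -> {restricted}: d_chisq={d:.1f}, d_df={ddf}, p={p:.3f} ({verdict})")
```

With these illustrative numbers, metric invariance would be retained while full scalar invariance would be rejected, the situation in which partial-invariance strategies become relevant.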
Background: This study examined (1) the factor structure of a depressive symptoms scale (DSS), (2) the sex and longitudinal invariance of the DSS, and (3) the predictive validity of the DSS during adolescence in terms of predicting depression and anxiety symptoms in early adulthood. Methods: Data were drawn from the Nicotine Dependence in Teens (NDIT) study, an ongoing prospective cohort study of 1,293 adolescents. Results: The analytical sample included 527 participants who provided complete data or had minimal missing data over follow-up. Confirmatory factor analysis revealed that an intercorrelated three-factor model with somatic, depressive, and anxiety factors provided the best fit. Further, this model was invariant across sex and time. Finally, DSS scores at Time 3 correlated significantly with depressive and anxiety symptoms measured at Time 4. Conclusions: Results suggest that the DSS is multidimensional and that it is a suitable instrument for examining sex differences in somatic, depressive, and anxiety symptoms, as well as changes in these symptoms over time, in adolescents. In addition, it could be used to identify individuals at risk of psychopathology during early adulthood.
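Competing factor structures like those compared above are typically judged with fit indices such as CFI and RMSEA, computed from the chi-square statistics of the target and baseline (independence) models. A small sketch using the standard formulas; the numeric inputs are purely illustrative, not the NDIT estimates:

```python
import math

def rmsea(chisq, df, n):
    """Root mean square error of approximation (standard single-group formula)."""
    return math.sqrt(max(chisq - df, 0.0) / (df * (n - 1)))

def cfi(chisq, df, chisq_base, df_base):
    """Comparative fit index relative to the independence (baseline) model."""
    num = max(chisq - df, 0.0)
    den = max(chisq_base - df_base, chisq - df, 0.0)
    return 1.0 - num / den

# Illustrative fit statistics for two competing models of the same scale:
n = 527
chisq_base, df_base = 2400.0, 66      # independence model
one_factor = (410.0, 54)
three_factor = (120.0, 51)

for name, (c, d) in [("one-factor", one_factor), ("three-factor", three_factor)]:
    print(f"{name}: CFI={cfi(c, d, chisq_base, df_base):.3f}, "
          f"RMSEA={rmsea(c, d, n):.3f}")
```

Under the usual cutoffs (CFI above roughly 0.95, RMSEA below roughly 0.06), the illustrative three-factor model would be preferred.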
Response styles are a source of contamination in questionnaire ratings, and therefore they threaten the validity of conclusions drawn from marketing research data. In this article, the authors examine five forms of stylistic responding (acquiescence and disacquiescence response styles, extreme response style/response range, midpoint responding, and noncontingent responding) and discuss their biasing effects on scale scores and correlations between scales. Using data from large, representative samples of consumers from 11 countries of the European Union, the authors find systematic effects of response styles on scale scores as a function of two scale characteristics (the proportion of reverse-scored items and the extent of deviation of the scale mean from the midpoint of the response scale) and show that correlations between scales can be biased upward or downward depending on the correlation between the response style components. In combination with the apparent lack of concern with response styles evidenced in a secondary analysis of commonly used marketing scales, these findings suggest that marketing researchers should pay greater attention to the phenomenon of stylistic responding when constructing and using measurement instruments.
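The response styles discussed above are commonly operationalized as simple per-respondent indices computed over a set of heterogeneous items. The sketch below shows one common set of operationalizations (the function and exact definitions are assumptions for illustration, not taken from the article); noncontingent responding is omitted because it requires comparing answers against an expected response pattern:

```python
import numpy as np

def response_style_indices(ratings, low=1, high=5):
    """Per-respondent stylistic-responding indices for Likert-type data
    (rows = respondents, columns = heterogeneous items)."""
    r = np.asarray(ratings, dtype=float)
    mid = (low + high) / 2.0
    return {
        # acquiescence: share of answers above the scale midpoint
        "ARS": (r > mid).mean(axis=1),
        # disacquiescence: share of answers below the scale midpoint
        "DRS": (r < mid).mean(axis=1),
        # extreme response style: share of endpoint categories
        "ERS": ((r == low) | (r == high)).mean(axis=1),
        # midpoint responding: share of midpoint answers
        "MRS": (r == mid).mean(axis=1),
        # response range: spread of the categories actually used
        "RR": r.max(axis=1) - r.min(axis=1),
    }

idx = response_style_indices([[5, 5, 4, 5, 3],    # an acquiescent respondent
                              [1, 5, 1, 5, 1]])   # an extreme respondent
print(idx["ARS"])
print(idx["ERS"])
```

Because the items should be content-heterogeneous, high values on these indices reflect how a person answers rather than what is being asked, which is what makes them usable as bias corrections.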
The literature on structural equation models is unclear on whether and when multicollinearity may pose problems in theory testing (Type II errors). Two Monte Carlo simulation experiments show that multicollinearity can cause problems under certain conditions, specifically: (1) when multicollinearity is extreme, Type II error rates are generally unacceptably high (over 80%); (2) when multicollinearity is between 0.6 and 0.8, Type II error rates can be substantial (greater than 50% and frequently above 80%) if composite reliability is weak, explained variance (R²) is low, and sample size is relatively small. However, as reliability improves (0.80 or higher), explained variance (R²) reaches 0.75, and sample size becomes relatively large, Type II error rates become negligible. (3) When multicollinearity is between 0.4 and 0.5, Type II error rates tend to be quite small, except when reliability is weak, R² is low, and sample size is small, in which case error rates can still be high (greater than 50%). Methods for detecting and correcting multicollinearity are briefly discussed. However, since multicollinearity is difficult to manage after the fact, researchers should avoid problems by carefully managing the factors known to mitigate multicollinearity problems (particularly measurement error).
Keywords: multicollinearity, measurement error, structural equation models
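The simulation logic can be illustrated on a much smaller scale. The sketch below substitutes plain OLS regression for the article's structural equation models, so the numbers are only qualitatively comparable, but it reproduces the key pattern: higher correlation between predictors inflates standard errors and thus the rate at which a true effect is missed:

```python
import numpy as np

def type2_rate(r_x1x2, beta2=0.2, n=100, sims=2000, seed=0):
    """Share of simulations in which a true nonzero effect of x2 is missed
    (|t| < 1.96) in OLS -- a simplified stand-in for the article's SEM setting."""
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, r_x1x2], [r_x1x2, 1.0]])
    misses = 0
    for _ in range(sims):
        x = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        y = 0.3 * x[:, 0] + beta2 * x[:, 1] + rng.standard_normal(n)
        X = np.column_stack([np.ones(n), x])        # intercept + two predictors
        XtX_inv = np.linalg.inv(X.T @ X)
        b = XtX_inv @ X.T @ y                        # OLS estimates
        resid = y - X @ b
        s2 = resid @ resid / (n - X.shape[1])        # residual variance
        se = np.sqrt(s2 * np.diag(XtX_inv))          # coefficient standard errors
        if abs(b[2] / se[2]) < 1.96:                 # true effect of x2 missed
            misses += 1
    return misses / sims

print(type2_rate(0.4))   # mild collinearity: relatively few missed effects
print(type2_rate(0.9))   # severe collinearity: many more missed effects
```

Adding measurement error to the predictors (e.g., x plus noise scaled to a target reliability) would extend the sketch toward the reliability conditions studied in the article.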
The authors investigate the overall and subarea influence of a comprehensive set of marketing and marketing-related journals at three points in time during a 30-year period using a citation-based measure of structural influence. The results show that a few journals wield a disproportionate amount of influence in the marketing journal network as a whole and that influential journals tend to derive their influence from many different journals. Different journals are most influential in different subareas of marketing; general business and managerially oriented journals have lost influence, whereas more specialized marketing journals have gained in influence over time. The Journal of Marketing emerges as the most influential marketing journal in the final period (1996–97) and as the journal with the broadest span of influence across all subareas. Yet the Journal of Marketing is notably influential among applied marketing journals, which themselves are of lesser influence. The index of structural influence is significantly correlated with other objective and subjective measures of influence but least so with the impact factors reported in the Social Sciences Citation Index. Overall, the findings demonstrate the rapid maturation of the marketing discipline and the changing role of key journals in the process.
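Citation-based structural influence is commonly computed as an eigenvector-style centrality on the journal-by-journal citation matrix: a journal is influential when it is cited by other influential journals. The sketch below implements that general idea with power iteration on a toy network; it is one standard formulation, not the article's specific index:

```python
import numpy as np

def influence_scores(C, iters=200):
    """Eigenvector-style influence scores from a citation matrix C,
    where C[i, j] = citations in journal i to journal j."""
    C = np.asarray(C, dtype=float)
    np.fill_diagonal(C, 0.0)               # ignore self-citations
    W = C / C.sum(axis=1, keepdims=True)   # row-normalize each journal's citing
    s = np.full(C.shape[0], 1.0 / C.shape[0])
    for _ in range(iters):
        s = s @ W                          # pass influence along citation links
        s = s / s.sum()                    # keep scores on a common scale
    return s

# Toy network: journals 0 and 1 mostly cite journal 2
C = [[0, 2, 8],
     [1, 0, 9],
     [3, 3, 0]]
scores = influence_scores(C)
print(scores.round(3))
```

In the toy network, journal 2 receives the highest score because the bulk of both other journals' citations flow to it, mirroring the paper's observation that influential journals draw citations from many different sources.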