Summary
Thought experiments have been used as an effective methodological approach to advance theory in numerous scientific fields. However, they are underutilized in organizational behavior (OB) and adjacent fields. Accordingly, we conducted a comprehensive and multidisciplinary literature review of thought experiments that entailed 174 sources in economics, psychology, marketing, medicine, sociology, finance, and other fields. We used insights from this literature review to define and describe the unique nature of thought experiments and offer a taxonomy of four main types based on a theory's development stage (i.e., early vs. late) and a study's theoretical goal (i.e., confirmation vs. disconfirmation). We also provide a decision‐making tree useful for evaluating whether conducting a thought experiment is beneficial for a particular research situation and which of the four types is most likely to produce a meaningful contribution. Then, we offer best‐practice recommendations for conducting thought experiments that address how to plan, execute, report results, and discuss implications. In addition, we demonstrate the potential of thought experiments by using the best‐practice recommendations to design and conduct a thought experiment in the domain of workplace allyship. Finally, we offer suggestions for future substantive research that would benefit from thought experiment methodology (i.e., diversity, equity, and inclusion; leadership; performance; selection and recruitment; teams; and turnover). Overall, our article offers a comprehensive review and recommendations that we hope will be a catalyst for using thought experiments to advance theory in OB and related fields.
Structural equation modeling (SEM) is a family of models in which multivariate techniques are used to simultaneously examine complex relationships among variables. The goal of SEM is to evaluate the extent to which proposed relationships reflect the actual pattern of relationships present in the data. SEM users employ specialized software to develop a model, which then generates a model-implied covariance matrix. This matrix is based on the user-defined theoretical model and represents the user’s beliefs about the relationships among the variables. Guided by the user’s predefined constraints, SEM software combines factor analysis and regression to estimate a set of parameters (often through maximum likelihood [ML] estimation) that produces the model-implied covariance matrix. Structural equation modeling thus capitalizes on the benefits of both factor analysis and path-analytic techniques to address complex research questions. SEM consists of six basic steps: model specification, identification, estimation, evaluation of model fit, model modification, and reporting of results. Conducting SEM analyses requires attention to certain data considerations, as data-related problems are often the reason for software failures. These considerations include sample size, screening for multivariate normality, examining outliers and multicollinearity, and assessing missing data. Furthermore, three notable issues SEM users might encounter are common method variance, subjectivity and transparency, and alternative model testing. First, analyzing common method variance requires recognizing three types of variance: common variance (variance shared with the factor), specific variance (reliable variance not explained by common factors), and error variance (unreliable and inexplicable variation in the variable).
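To make the notion of a model-implied covariance matrix concrete, consider a one-factor measurement model, where the implied covariance is Σ = ΛΦΛᵀ + Θ (loadings Λ, factor variance Φ, residual variances Θ). The sketch below uses NumPy with hypothetical parameter values chosen for illustration, not estimates from any actual study:

```python
import numpy as np

# Hypothetical one-factor model with three standardized indicators.
# Model-implied covariance: Sigma = Lambda * Phi * Lambda^T + Theta.
Lambda = np.array([[0.8], [0.7], [0.6]])  # factor loadings (3 indicators x 1 factor)
Phi = np.array([[1.0]])                   # factor variance (fixed to 1 for scaling)
Theta = np.diag([0.36, 0.51, 0.64])       # residual variances (1 - loading^2 here)

Sigma = Lambda @ Phi @ Lambda.T + Theta
print(Sigma)
```

During ML estimation, SEM software iteratively adjusts the free parameters (here, the loadings and residual variances) so that this implied Σ comes as close as possible to the observed sample covariance matrix.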
Second, SEM still lacks clear guidelines for the modeling process, which threatens replicability. Decisions are often subjective, based on the researcher’s preferences and knowledge of what is most appropriate for achieving the best overall model. Finally, reporting alternatives to the hypothesized model is another issue SEM users should consider. When testing a hypothesized model, researchers should consider alternative (nested) models derived by constraining or eliminating one or more paths in the hypothesized model. Alternative models offer several benefits; however, they should be driven and supported by existing theory, and the researcher should clearly report the findings for each alternative model tested. Users of SEM also commonly encounter model-specific issues, among the most frequent of which are Heywood cases, nonidentification, and nonpositive definite matrices. Heywood cases arise when the results contain negative variance estimates or squared multiple correlations greater than 1.0; a researcher can often resolve them by constraining the offending residual variance to a small, plausible nonnegative value. Nonidentification occurs when the data do not provide enough information to yield a unique estimate for every free parameter, for example when the number of free parameters exceeds the number of unique elements in the observed covariance matrix. Nonpositive definite matrices result from linear dependencies and/or correlations greater than 1.0. To address them, researchers can check that no indicator variable is a linear combination of others, inspect the output for negative residual variances, evaluate whether the sample size is adequate, or re-specify the proposed model. When used properly, structural equation modeling is a powerful tool that allows for the simultaneous testing of complex models.
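A nonpositive definite input matrix can be detected before estimation by inspecting its eigenvalues: a covariance or correlation matrix is positive definite only if all eigenvalues are strictly positive. The sketch below uses a hypothetical correlation matrix in which two variables that correlate only 0.60 with each other both correlate 0.99 with a third, an impossible pattern that produces a negative eigenvalue:

```python
import numpy as np

# Hypothetical, internally inconsistent correlation matrix:
# r12 = 0.60 but r13 = r23 = 0.99 cannot jointly hold,
# so the matrix is nonpositive definite (a common cause of
# SEM software failures).
R = np.array([
    [1.00, 0.60, 0.99],
    [0.60, 1.00, 0.99],
    [0.99, 0.99, 1.00],
])

eigenvalues = np.linalg.eigvalsh(R)  # eigenvalues of a symmetric matrix
print("eigenvalues:", eigenvalues)
if np.any(eigenvalues <= 0):
    print("Nonpositive definite: check for linear dependencies "
          "or out-of-range correlations before fitting the model.")
```

Running this kind of check on the input matrix, before handing it to SEM software, makes it easier to distinguish a data problem from a model-specification problem when estimation fails.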