In psychology and the behavioral sciences generally, the hierarchical linear model (HLM) and its extensions for discrete outcomes are popular methods for modeling clustered data. HLM and its discrete-outcome extensions, however, are certainly not the only methods available for modeling clustered data. Although other methods exist and are widely implemented in other disciplines, psychologists have yet to consider them in substantive studies. This article compares and contrasts HLM with alternative methods, including generalized estimating equations and cluster-robust standard errors. These alternative methods do not model random effects and thus make fewer assumptions; they are interpreted identically to single-level methods, with the benefit that estimates are adjusted to reflect the clustering of observations. Situations where these alternative methods may be advantageous are discussed, including research questions for which random effects are and are not required, cases where random effects change the interpretation of regression coefficients, challenges of modeling discrete outcomes with random effects, and published psychology articles that used HLM but might have benefited from the alternative methods. Illustrative examples demonstrate the advantages of the alternative methods as well as situations in which HLM would be the preferred method.
Multilevel models are an increasingly popular method for analyzing data that originate from a clustered or hierarchical structure. To use multilevel models effectively, one must have an adequately large number of clusters; otherwise, some model parameters will be estimated with bias. The goals of this paper are to (1) raise awareness of the problems associated with a small number of clusters, (2) review previous studies on multilevel models with a small number of clusters, (3) provide an illustrative simulation demonstrating how a simple model is adversely affected by a small number of clusters, (4) offer researchers remedies if they encounter clustered data with a small number of clusters, and (5) outline methodological topics that have yet to be addressed in the literature.

Keywords: Multilevel model; HLM; Small sample; Mixed model; Small number of clusters

Frequently in educational psychology research, observations have a hierarchical structure (Raudenbush and Bryk 2002): students are nested within classrooms, children within families, and teachers within schools. When data are sampled in a multistage manner or observations are otherwise clustered, modeling the data while ignoring the clustering will often yield underestimated standard errors whenever the outcome variable exhibits dependence based on the clustering (i.e., the intraclass correlation is greater than zero). When clustering is ignored, the residuals are not independently and identically distributed, violating an assumption of single-level models such as ordinary least-squares regression. This dependence ultimately inflates the Type I error rate for significance tests of regression coefficients. Methods have been developed in the statistical literature, however, that address data from a hierarchical structure and account for the dependence among observations.
One such method goes by many names and acronyms but is often referred to as hierarchical linear modeling (HLM), multilevel modeling (MLM, the term used in this paper), or mixed-effects modeling (Raudenbush and Bryk 2002). This is the method on which this paper focuses. To estimate MLMs without bias, adequate sample sizes must be obtained, since MLMs are typically estimated with maximum likelihood (ML) methods, and ML estimates are only asymptotically unbiased.
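The inflated Type I error rate described above can be made concrete with a short Monte Carlo sketch. All numbers below (cluster count, cluster size, ICC, replications) are our own illustrative choices, not values from the papers; the cluster-robust comparison uses the basic CR0 sandwich estimator with a t reference distribution on G − 1 degrees of freedom, one common small-sample convention.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
G, m, n_reps, alpha = 10, 20, 500, 0.05   # 10 clusters of 20; 500 replications
tau, sigma = 1.0, 1.0                     # ICC = tau^2 / (tau^2 + sigma^2) = 0.5
n = G * m
cluster = np.repeat(np.arange(G), m)

naive_rej = robust_rej = 0
for _ in range(n_reps):
    x = np.repeat(rng.normal(size=G), m)             # cluster-level predictor, TRUE slope = 0
    u = np.repeat(rng.normal(scale=tau, size=G), m)  # cluster random intercepts
    y = u + rng.normal(scale=sigma, size=n)

    X = np.column_stack([np.ones(n), x])
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta

    # Naive OLS standard error: assumes independent residuals.
    s2 = resid @ resid / (n - 2)
    se_naive = np.sqrt(s2 * XtX_inv[1, 1])
    if abs(beta[1] / se_naive) > stats.t.ppf(1 - alpha / 2, n - 2):
        naive_rej += 1

    # Cluster-robust (CR0) sandwich standard error.
    meat = np.zeros((2, 2))
    for g in range(G):
        score = X[cluster == g].T @ resid[cluster == g]
        meat += np.outer(score, score)
    se_robust = np.sqrt((XtX_inv @ meat @ XtX_inv)[1, 1])
    if abs(beta[1] / se_robust) > stats.t.ppf(1 - alpha / 2, G - 1):
        robust_rej += 1

naive_rate, robust_rate = naive_rej / n_reps, robust_rej / n_reps
print(f"Type I error, naive OLS:      {naive_rate:.2f}")   # far above the nominal 0.05
print(f"Type I error, cluster-robust: {robust_rate:.2f}")  # much closer to 0.05
```

Because the predictor is constant within clusters and the ICC is large, the naive test rejects the true null far more often than 5% of the time, while the cluster-robust test stays much closer to the nominal rate.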
Small-sample inference with clustered data has received increased attention recently in the methodological literature, with several simulation studies examining the small-sample behavior of many methods. However, nearly all previous studies focus on a single class of methods (e.g., only multilevel models, only corrections to sandwich estimators), and the differential performance of the various methods that can accommodate clustered data with very few clusters is largely unknown, potentially due to rigid disciplinary preferences. Furthermore, a majority of these studies focus on scenarios with 15 or more clusters and feature unrealistically simple data-generation models with very few predictors. This article, motivated by an applied educational psychology cluster randomized trial, presents a simulation study that simultaneously addresses extreme small samples and the differential performance (estimation bias, Type I error rates, and relative power) of 12 methods for clustered data, using a model with a more realistic number of predictors. The motivating data are then modeled with each method, and the results are compared. Results show that generalized estimating equations perform poorly, that the choice of Bayesian prior distributions affects performance, and that fixed-effect models perform quite well. Limitations and implications for applications are also discussed.
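One way to see why fixed-effects models can hold up with very few clusters is the "within" transformation: demeaning each variable inside its cluster removes the cluster effects exactly, so no between-cluster variance component needs to be estimated. The numpy sketch below is a minimal illustration with invented values, applicable only to predictors that vary within clusters; it is not the simulation design from the article.

```python
import numpy as np

rng = np.random.default_rng(7)
G, m = 8, 25                                     # deliberately few clusters
cluster = np.repeat(np.arange(G), m)
x = rng.normal(size=G * m)                       # predictor that varies WITHIN clusters
u = np.repeat(rng.normal(scale=1.0, size=G), m)  # unobserved cluster effects
y = 0.4 * x + u + rng.normal(size=G * m)         # true slope = 0.4

def demean(v):
    """Subtract each cluster's mean: the 'within' transformation."""
    means = np.array([v[cluster == g].mean() for g in range(G)])
    return v - means[cluster]

# Demeaning eliminates the cluster effects u exactly, even with only
# 8 clusters, so the slope can be estimated by simple least squares
# on the transformed variables.
xw, yw = demean(x), demean(y)
slope_fe = (xw @ yw) / (xw @ xw)
print(f"fixed-effects slope estimate: {slope_fe:.2f}")
```

The trade-off is that the within transformation also removes any predictor that is constant within clusters, so cluster-level effects (such as a cluster-randomized treatment) cannot be estimated this way.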
We present two types of constructs, individual-level and cluster-level, and their confirmatory factor analytic validation models when data come from individuals nested within clusters. When a construct is theoretically individual-level, spurious construct-irrelevant dependency in the data may appear to signal cluster-level dependency; in such cases, however, and consistent with theory, a single-level analysis with a correction for dependency may be appropriate. Regarding cluster-level constructs, we discuss two types, shared and configural, and present appropriate validation models. Illustrative validation analyses with individual, shared, and configural constructs are provided using empirical data, along with simple simulations demonstrating the spurious effects that can occur with nested data. The article concludes with future directions for construct validation in multilevel settings.
Research on job burnout has traditionally focused on contextual antecedent conditions, although a theoretically appropriate conception implicates person-environment relationships. The authors tested several models featuring various combinations of personal and contextual influences on burnout and job satisfaction. Measures of core self-evaluations, organizational constraints, burnout, and job satisfaction were collected from 859 health care employees. Results from structural equation modeling analyses revealed an influence of core self-evaluations and perceived organizational constraints on job burnout and satisfaction, suggesting both personal and contextual contributions. These results favor a broadening of current thinking about the impact of situational constraints on the expression of job burnout, as well as about the role of dispositional affective responding, in order to effectively address occupational health problems.