This study explores the performance of classical methods for detecting publication bias, namely Egger's regression test, the funnel plot test, Begg's rank correlation, and the Trim and Fill method, in meta-analyses of studies that report multiple effects. Publication bias, outcome reporting bias, and a combination of both were generated. Egger's regression and the funnel plot test were extended to three-level models, and possible cutoffs for the R0 estimator of the Trim and Fill method were explored. Furthermore, we checked whether combining the results of several methods yielded better control of Type I error rates. Results show that no method works well across all conditions, and that performance depends mainly on the population effect size value and on the total variance.
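For illustration, the classical (two-level) Egger's regression test regresses standardized effect sizes on precision and tests whether the intercept differs from zero. The sketch below shows that basic test, not the three-level extension studied in the article; the function name `egger_test` and the data are hypothetical.

```python
# Minimal sketch of the classical (two-level) Egger's regression test.
# yi: observed effect sizes; vi: their sampling variances (made-up data).
import numpy as np
from scipy import stats

def egger_test(yi, vi):
    """Regress standardized effects on precision; an intercept that differs
    from zero suggests funnel-plot asymmetry (possible publication bias)."""
    yi, vi = np.asarray(yi, float), np.asarray(vi, float)
    se = np.sqrt(vi)
    z = yi / se            # standardized effect sizes
    precision = 1.0 / se   # predictor in Egger's regression
    X = np.column_stack([np.ones_like(precision), precision])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    df = len(yi) - 2
    s2 = resid @ resid / df
    cov = s2 * np.linalg.inv(X.T @ X)
    t = beta[0] / np.sqrt(cov[0, 0])   # t test of the intercept
    p = 2 * stats.t.sf(abs(t), df)
    return beta[0], t, p

# Example with hypothetical data:
yi = [0.41, 0.30, 0.55, 0.12, 0.48]
vi = [0.02, 0.05, 0.01, 0.09, 0.03]
print(egger_test(yi, vi))
```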
In meta-analysis, study participants are nested within studies, leading to a multilevel data structure. The traditional random effects model can be considered a model with a random study effect, but additional random effects can be added to account for dependent effect sizes within or across studies. The goal of this systematic review is threefold. First, we describe how multilevel models with multiple random effects (i.e., hierarchical three-, four-, and five-level models and cross-classified random effects models) are applied in meta-analysis. Second, we illustrate how, in some specific three-level meta-analyses, a more sophisticated model could have been used to deal with additional dependencies in the data. Third, we describe the distribution of the characteristics of multilevel meta-analyses (e.g., the distribution of the number of outcomes across studies, or which dependencies are typically modeled) so that future simulation studies can simulate more realistic conditions. Results showed that four- or five-level and cross-classified random effects models are rarely used, although they might better account for the meta-analytic data structure of the analyzed datasets. We also found that existing simulation studies on multilevel meta-analysis with multiple random factors could have used more realistic simulation factor conditions. The implications of these results are discussed, and further suggestions are given.
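As a point of reference, the hierarchical three-level model referred to above is commonly written as follows; this notation is standard in the multilevel meta-analysis literature and is assumed here rather than quoted from the review:

$$d_{jk} = \beta_0 + u_k + w_{jk} + e_{jk}, \qquad u_k \sim N(0, \sigma_u^2), \quad w_{jk} \sim N(0, \sigma_w^2), \quad e_{jk} \sim N(0, \sigma_{e_{jk}}^2),$$

where $d_{jk}$ is the $j$th observed effect size in study $k$, $u_k$ is a between-study residual, $w_{jk}$ a within-study (between-effect) residual, and the sampling variances $\sigma_{e_{jk}}^2$ are treated as known. Four- and five-level models add further random effects at higher or intermediate levels, and cross-classified models allow two non-nested grouping factors (e.g., study and outcome) to cross.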
Although the results of the current review reveal that the methodological quality of SCED meta-analyses has increased over time, further efforts are still needed to improve it.
It is common for the primary studies in meta-analyses to report multiple effect sizes, generating dependence among them. Hierarchical three-level models have been proposed as a means to deal with this dependency. Sometimes, however, dependency may be due to multiple random factors, and random factors are not necessarily nested but may instead be crossed. For instance, effect sizes may belong to different studies and, at the same time, represent effects on different outcomes. Cross-classified random-effects models (CCREMs) can be used to model this nonhierarchical dependence structure. In this article, we use a simulation study to compare the performance of CCREMs with that of other meta-analytic models and estimation procedures, including three- and two-level models and robust variance estimation. We also evaluated the performance of CCREMs when the underlying data were generated by a multivariate model. The results indicated that, whereas the quality of the fixed-effect estimates is unaffected by misspecification of the model, the standard error estimates of the mean effect size and of the moderator variables' effects, as well as the variance component estimates, are biased under some conditions. Applying CCREMs led to unbiased fixed-effect and variance component estimates, outperforming the other models. Even when a CCREM was not used to generate the data, applying the CCREM yielded sound parameter estimates and inferences.
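In common notation (assumed here, not taken from the article), a simple CCREM for the effect size on outcome $j$ in study $k$ crosses a random study effect with a random outcome effect:

$$d_{jk} = \beta_0 + u_k + v_j + e_{jk}, \qquad u_k \sim N(0, \sigma_u^2), \quad v_j \sim N(0, \sigma_v^2),$$

so that, unlike in a three-level model, outcomes are not nested within studies: the same outcome category can recur across many studies, and its random effect $v_j$ is shared by all effect sizes measuring that outcome.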
When (meta-)analyzing single-case experimental design (SCED) studies by means of hierarchical or multilevel modeling, applied researchers almost exclusively rely on the linear mixed model (LMM). This type of model assumes that the residuals are normally distributed. However, SCED studies very often consider outcomes of a discrete rather than a continuous nature, such as counts, percentages, or rates. In those cases, the normality assumption does not hold. The LMM can be extended into a generalized linear mixed model (GLMM), which can account for the discrete nature of SCED count data. In this simulation study, we examine the effects of misspecifying an LMM for SCED count data simulated according to a GLMM. We compare the performance of a misspecified LMM and of a GLMM in terms of goodness of fit, fixed effect parameter recovery, Type I error rate, and power. Because the LMM and the GLMM do not estimate identical fixed effects, we provide a transformation to compare fixed effect parameter recovery. The results show that, compared to the GLMM, the LMM performs worse in terms of goodness of fit and power. Performance in terms of fixed effect parameter recovery is equally good for both models, and in terms of Type I error rate the LMM performs better than the GLMM. Finally, we provide some guidelines for applied researchers about aspects to consider when using an LMM to analyze SCED count data.
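To make the modeling issue concrete: for count outcomes, a Poisson GLMM with a log link for case $j$ at measurement occasion $i$ might take the form below. The parameterization and the transformation shown are a plausible sketch consistent with the abstract, not necessarily the exact ones used in the article:

$$y_{ij} \sim \text{Poisson}(\lambda_{ij}), \qquad \log(\lambda_{ij}) = (\beta_0 + u_{0j}) + (\beta_1 + u_{1j})\,\text{Phase}_{ij},$$

where $\text{Phase}_{ij}$ codes baseline (0) versus treatment (1). Because the treatment effect is multiplicative on the count scale ($e^{\beta_1}$), comparing it with the additive fixed effect of an LMM requires a transformation such as $e^{\beta_0 + \beta_1} - e^{\beta_0}$, the model-implied change in the expected count (ignoring random-effect variance terms).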
Pain-related fear is typically associated with avoidance behavior and pain-related disability in youth with chronic pain. Youth with elevated pain-related fear have attenuated treatment responses; thus, targeted treatment is highly warranted. Evidence supporting graded in vivo exposure treatment (GET) for adults with chronic pain is considerable, but just emerging for youth. The current investigation represents the first sequential replicated and randomized single-case experimental phase design with multiple measures evaluating GET for youth with chronic pain, entitled GET Living. A cohort of 27 youth (81% female) with mixed chronic pain completed GET Living. For each participant, a no-treatment randomized baseline period was compared with GET Living and 3- and 6-month follow-ups. Daily changes in the primary outcomes (fear and avoidance) and the secondary outcomes (pain catastrophizing, pain intensity, and pain acceptance) were assessed using electronic diaries and subjected to descriptive and model-based inference analyses. Based on individual effect size calculations, a third of participants had significantly improved by the end of treatment on fear, avoidance, and pain acceptance. By follow-up, over 80% of participants had improved across all primary and secondary outcomes. Results of the model-based inference analyses examining the series of replicated cases were generally consistent. Improvements during GET Living were superior to the no-treatment randomized baseline period for avoidance, pain acceptance, and pain intensity, whereas fear and pain catastrophizing did not improve. All five outcomes emerged as significantly improved at the 3- and 6-month follow-ups. The results of this replicated single-case experimental phase design support the effectiveness of graded exposure for youth with chronic pain and elevated pain-related fear avoidance.
Meta-analytic datasets can be large, especially when primary studies report multiple effect sizes. Visualization of meta-analytic data is therefore useful to summarize the data and understand the information reported in primary studies. The gold-standard figures in meta-analysis are forest and funnel plots. However, neither of these plots can yet account for the existence of multiple effect sizes within primary studies. This manuscript describes extensions of the funnel plot, forest plot, and caterpillar plot that adapt them to three-level meta-analyses. For forest plots, we propose plotting the study-specific effects and their precision, and adding confidence intervals that reflect the sampling variance of the individual effect sizes. For caterpillar plots and funnel plots, we recommend plotting individual effect sizes and averaged study effect sizes in two separate graphs. For the funnel plot, plotting separate graphs might improve the detection of publication bias and/or selective outcome reporting bias.
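As a hedged illustration of the recommendation to plot individual and study-averaged effect sizes in separate funnel plots, the sketch below uses simulated (hypothetical) data; the per-study averaging and the approximate standard errors are simplifying assumptions, not the article's exact procedure.

```python
# Hypothetical sketch: funnel plots with individual effect sizes (left)
# and study-averaged effect sizes (right).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
study = np.repeat(np.arange(15), rng.integers(1, 6, size=15))  # effects per study
se = rng.uniform(0.05, 0.4, size=study.size)  # standard errors
es = 0.3 + rng.normal(0, se)                  # observed effect sizes

# Average effect (and a rough approximate SE) per study
studies = np.unique(study)
mean_es = np.array([es[study == s].mean() for s in studies])
mean_se = np.array([np.sqrt(np.mean(se[study == s] ** 2) / (study == s).sum())
                    for s in studies])

fig, axes = plt.subplots(1, 2, figsize=(9, 4), sharey=True)
for ax, x, y, title in [(axes[0], es, se, "Individual effect sizes"),
                        (axes[1], mean_es, mean_se, "Study-averaged effect sizes")]:
    ax.scatter(x, y, s=15)
    ax.invert_yaxis()                # precise estimates (small SE) at the top
    ax.axvline(0.3, linestyle="--")  # assumed overall effect
    ax.set_xlabel("Effect size")
    ax.set_title(title)
axes[0].set_ylabel("Standard error")
plt.tight_layout()
plt.show()
```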
In meta-analysis, primary studies often include multiple, dependent effect sizes. Several methods address this dependency, such as the multivariate approach, three-level models, and the robust variance estimation (RVE) method. To date, most simulation studies that explore the performance of these methods have focused on the estimation of the overall effect size. However, researchers are sometimes interested in obtaining separate effect size estimates for different types of outcomes. A recent simulation study (Park & Beretvas, 2019) compared the performance of the three-level approach and the RVE method in estimating outcome-specific effects when several effect sizes are reported for different types of outcomes within studies. The goal of this paper is to extend that study by incorporating additional simulation conditions and by exploring the performance of additional models: the multivariate model, a three-level model that specifies different study effects for each type of outcome, a three-level model that specifies a common study effect for all outcomes, and separate three-level models for each type of outcome. We also tested whether the a posteriori application of the RVE correction improves the standard error estimates and the 95% confidence intervals. Results show that applying separate three-level models for each type of outcome is the only approach that consistently gives adequate standard error estimates. Moreover, the a posteriori application of the RVE correction results in correct 95% confidence intervals in all models, even if they are misspecified, meaning that the Type I error rate is adequate when the RVE correction is implemented.
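To illustrate the idea behind the RVE correction mentioned above, the sketch below computes a basic cluster-robust (CR0-type) standard error for an overall weighted mean effect, clustering effect sizes by study. Actual RVE implementations use refined small-sample adjustments (e.g., CR2); the data here are made up.

```python
# Hedged sketch of a CR0-type cluster-robust standard error for the
# weighted mean effect size, with effect sizes clustered within studies.
import numpy as np

def cluster_robust_mean(yi, vi, study):
    yi, vi = np.asarray(yi, float), np.asarray(vi, float)
    study = np.asarray(study)
    w = 1.0 / vi
    mu = np.sum(w * yi) / np.sum(w)  # inverse-variance weighted mean
    resid = yi - mu
    # Sandwich "meat": squared sums of weighted residuals per cluster
    meat = sum(np.sum(w[study == s] * resid[study == s]) ** 2
               for s in np.unique(study))
    se = np.sqrt(meat) / np.sum(w)
    return mu, se

yi = [0.20, 0.35, 0.10, 0.50, 0.42, 0.31]
vi = [0.04, 0.05, 0.03, 0.06, 0.04, 0.05]
study = [1, 1, 2, 2, 3, 3]
print(cluster_robust_mean(yi, vi, study))
```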