Publication bias distorts the available empirical evidence and misinforms policymaking. Evidence of publication bias is mounting in virtually all fields of empirical research. This paper proposes the endogenous kink (EK) meta‐regression model as a novel method of publication bias correction. The EK method fits a piecewise linear meta‐regression of the primary estimates on their standard errors, with a kink at the cutoff value of the standard error below which publication selection is unlikely. We provide a simple method of endogenously determining this cutoff value as a function of a first‐stage estimate of the true effect and an assumed threshold of statistical significance. Our Monte Carlo simulations show that EK is less biased and more efficient than other related regression‐based methods of publication bias correction in a variety of research conditions.
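The EK mechanics described above can be illustrated with a minimal numerical sketch. The simulated meta-sample, the precision-weighted first stage, and the 1.96 significance threshold are illustrative assumptions; the paper's own first stage and simulation design may differ in detail:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated meta-sample: 200 primary estimates of a true effect of 0.3,
# drawn here without publication selection, to show the EK mechanics.
n = 200
se = rng.uniform(0.02, 0.5, size=n)
est = 0.3 + rng.normal(0, se)

# First stage: a precision-weighted average as a stand-in estimate of
# the true effect (an assumption for this sketch).
b1 = np.average(est, weights=1 / se**2)

# Endogenous kink: the SE below which an effect of size b1 remains
# significant at the 5% level, so selection on significance is unlikely.
a1 = abs(b1) / 1.96

# Piecewise-linear meta-regression with a kink at a1, estimated by WLS
# with precision weights (slope fixed at zero below a1 by construction).
X = np.column_stack([np.ones(n), np.maximum(se - a1, 0.0)])
W = np.diag(1 / se**2)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ est)
print(beta[0])  # intercept = bias-corrected effect estimate
```

With no selection in the simulated data, the intercept recovers the true effect of 0.3 up to sampling noise; under selection, the kinked slope term absorbs the bias that accumulates among high-SE estimates.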
We consider Whittle likelihood estimation of seasonal autoregressive fractionally integrated moving-average (SARFIMA) models in the presence of an additional measurement error and show that the spectral maximum Whittle likelihood estimator is asymptotically normal. We illustrate by simulation that ignoring measurement errors may result in incorrect inference. Hence, it is pertinent to test for the presence of measurement errors, which we do by developing a likelihood ratio (LR) test within the Whittle likelihood framework. We derive the non-standard asymptotic null distribution of this LR test and its limiting distribution under a sequence of local alternatives. Because the order of the SARFIMA model is unknown in practice, we consider three modifications of the LR test that take model uncertainty into account. We study the finite-sample size and power of the LR test and its modifications. The efficacy of the proposed approach is illustrated with a real-life example.
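To fix ideas, the core Whittle machinery (periodogram plus parametric spectral density) can be sketched for the simplest special case, an ARFIMA(0, d, 0) model without seasonality or measurement error. The simulation scheme and grid search below are illustrative assumptions, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an ARFIMA(0, d, 0) series from its MA(inf) expansion,
# truncated at the series length (a rough approximation for illustration).
def simulate_arfima(n, d, rng, burn=500):
    m = n + burn
    eps = rng.normal(size=m)
    psi = np.ones(m)                       # psi_0 = 1
    for k in range(1, m):                  # psi_k = psi_{k-1} * (k-1+d)/k
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    return np.convolve(eps, psi)[:m][burn:]

# Concentrated Whittle objective for the memory parameter d, with the
# innovation variance profiled out.
def whittle_objective(d, x):
    n = len(x)
    j = np.arange(1, (n - 1) // 2 + 1)
    lam = 2 * np.pi * j / n
    per = np.abs(np.fft.fft(x)[j]) ** 2 / (2 * np.pi * n)  # periodogram
    g = (2 * np.sin(lam / 2)) ** (-2 * d)  # ARFIMA(0,d,0) spectral shape
    return np.log(np.mean(per / g)) + np.mean(np.log(g))

x = simulate_arfima(2000, 0.3, rng)
grid = np.linspace(-0.45, 0.45, 181)
d_hat = grid[np.argmin([whittle_objective(d, x) for d in grid])]
```

An additive measurement error would add a constant to the spectral density, which is what the paper's LR test probes for; here the model is fit without that term, so `d_hat` simply recovers the memory parameter used in the simulation.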
Meta-studies are often conducted on empirical findings obtained from overlapping samples. Sample overlap is common in research fields that rely heavily on aggregated observational data (e.g., economics and finance), where the same set of data may be used in several studies. More generally, sample overlap tends to occur whenever multiple estimates are sampled from the same study. We show analytically how failing to account for sample overlap causes high rates of false positives, especially for large meta-sample sizes. We propose a generalized-weights (GW) meta-estimator, which solves the sample overlap problem by explicitly modeling the variance-covariance matrix that describes the structure of dependence among estimates. We show how this matrix can be constructed from information that is usually available in basic sample descriptions in the primary studies (i.e., sample sizes and numbers of overlapping observations). The GW meta-estimator amounts to weighting each empirical outcome according to its share of independent sampling information. We use Monte Carlo simulations to (a) demonstrate how the GW meta-estimator brings the rate of false positives to its nominal level, and (b) quantify the efficiency gains of the GW meta-estimator relative to standard meta-estimators. The GW meta-estimator is straightforward to implement and can handle any case of sample overlap, within or between studies.
Highlights
• Meta-analyses are often conducted on empirical outcomes based on samples containing common observations.
• Sample overlap induces a correlation structure among empirical outcomes that harms the statistical properties of meta-analysis methods.
• We derive the analytic conditions under which sample overlap causes conventional meta-estimators to exhibit high rates of false positives.
• We propose a generalized-weights (GW) solution to sample overlap, which involves approximating the variance-covariance matrix that describes the correlation structure between outcomes; we show how to construct this matrix from information typically reported in the primary studies.
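The GW idea can be sketched as a GLS meta-estimator built on an overlap-based covariance matrix. The three-study numbers and the specific correlation approximation (share of common observations relative to the geometric mean of the two sample sizes) are illustrative assumptions for this sketch:

```python
import numpy as np

# Hypothetical meta-sample: three estimates with known SEs; estimates
# 1 and 2 share 600 observations, estimate 3 is independent.
est = np.array([0.25, 0.31, 0.22])
se = np.array([0.05, 0.06, 0.04])
n = np.array([1000, 800, 1200])          # primary sample sizes
n_ov = np.zeros((3, 3))
n_ov[0, 1] = n_ov[1, 0] = 600            # overlapping observations

# Approximate correlation between overlapping estimates (an assumption):
# common observations over the geometric mean of the two sample sizes.
rho = n_ov / np.sqrt(np.outer(n, n))
np.fill_diagonal(rho, 1.0)
Sigma = rho * np.outer(se, se)           # modeled covariance matrix

# GLS meta-estimate and its standard error: each outcome is weighted by
# its share of independent sampling information.
w = np.linalg.solve(Sigma, np.ones(3))   # Sigma^{-1} * 1
gw_est = w @ est / w.sum()
gw_se = 1 / np.sqrt(w.sum())
print(gw_est, gw_se)
```

Relative to inverse-variance weights that ignore the overlap, the GW standard error is larger, which is exactly the correction that keeps the false-positive rate at its nominal level.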