Publication bias distorts the available empirical evidence and misinforms policymaking. Evidence of publication bias is mounting in virtually all fields of empirical research. This paper proposes the endogenous kink (EK) meta‐regression model as a novel method of publication bias correction. The EK method fits a piecewise linear meta‐regression of the primary estimates on their standard errors, with a kink at the cutoff value of the standard error below which publication selection is unlikely. We provide a simple method of endogenously determining this cutoff value as a function of a first‐stage estimate of the true effect and an assumed threshold of statistical significance. Our Monte Carlo simulations show that EK is less biased and more efficient than other related regression‐based methods of publication bias correction in a variety of research conditions.
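The EK approach can be illustrated with a short sketch. The snippet below fits a piecewise-linear weighted least-squares meta-regression of the estimates on their standard errors, with a kink at a cutoff derived from a first-stage effect estimate and a significance threshold. The exact cutoff formula and weighting used here are simplifying assumptions for illustration, not the paper's precise specification.

```python
import numpy as np

def ek_estimate(est, se, z=1.96):
    """Sketch of an endogenous-kink (EK) style meta-regression.

    Below the cutoff a, publication selection is assumed unlikely, so
    the slope on the standard error is zero; above a, the estimates are
    allowed to rise linearly with the standard error.  The corrected
    effect is the intercept of the piecewise-linear WLS fit.
    """
    est, se = np.asarray(est, float), np.asarray(se, float)
    w = 1.0 / se**2                          # inverse-variance weights
    # First stage: precision-weighted mean as a preliminary effect estimate.
    b0 = np.sum(w * est) / np.sum(w)
    # Cutoff: the SE below which |b0| would be significant at threshold z
    # (an illustrative choice of cutoff function).
    a = max(abs(b0) / z, se.min())
    # Second stage: WLS with the kinked regressor max(0, SE - a).
    X = np.column_stack([np.ones_like(se), np.maximum(0.0, se - a)])
    Xw = X * np.sqrt(w)[:, None]
    yw = est * np.sqrt(w)
    beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    return beta[0]                           # bias-corrected effect estimate
```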
We consider the Whittle likelihood estimation of seasonal autoregressive fractionally integrated moving-average models in the presence of an additional measurement error and show that the spectral maximum Whittle likelihood estimator is asymptotically normal. We illustrate by simulation that ignoring measurement errors may result in incorrect inference. Hence, it is pertinent to test for the presence of measurement errors, which we do by developing a likelihood ratio (LR) test within the framework of Whittle likelihood. We derive the non-standard asymptotic null distribution of this LR test and its limiting distribution under a sequence of local alternatives. Because in practice we do not know the order of the seasonal autoregressive fractionally integrated moving-average model, we consider three modifications of the LR test that take model uncertainty into account. We study the finite-sample size and power of the LR test and its modifications. The efficacy of the proposed approach is illustrated by a real-life example.
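For reference, the Whittle likelihood underlying the estimator takes the standard form below; the additive-noise modification of the spectrum is the natural way measurement error enters, stated here as an illustration rather than the paper's exact specification.

```latex
\ell_W(\theta) \;=\; -\sum_{j=1}^{\lfloor n/2 \rfloor}
\left[ \log f_\theta(\lambda_j) + \frac{I_n(\lambda_j)}{f_\theta(\lambda_j)} \right],
\qquad \lambda_j = \frac{2\pi j}{n},
```

where \(I_n(\lambda)\) is the periodogram of the observed series and \(f_\theta\) the model spectral density. Under additive white measurement error with variance \(\sigma_u^2\), the spectral density of the observed series becomes \(f_\theta(\lambda) + \sigma_u^2/(2\pi)\), and the LR statistic compares the maximized Whittle likelihoods with and without the error term, testing \(H_0\colon \sigma_u^2 = 0\). Because the null value sits on the boundary of the parameter space, the asymptotic null distribution is non-standard.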
Meta-studies are often conducted on empirical findings obtained from overlapping samples. Sample overlap is common in research fields that rely strongly on aggregated observational data (e.g., economics and finance), where the same set of data may be used in several studies. More generally, sample overlap tends to occur whenever multiple estimates are sampled from the same study. We show analytically how failing to account for sample overlap causes high rates of false positives, especially for large meta-sample sizes. We propose a generalized-weights (GW) meta-estimator, which solves the sample overlap problem by explicitly modeling the variance-covariance matrix that describes the structure of dependence among estimates. We show how this matrix can be constructed from information that is usually available from basic sample descriptions in the primary studies (i.e., sample sizes and numbers of overlapping observations). The GW meta-estimator amounts to weighting each empirical outcome according to its share of independent sampling information. We use Monte Carlo simulations to (a) demonstrate how the GW meta-estimator brings the rate of false positives to its nominal level, and (b) quantify the efficiency gains of the GW meta-estimator relative to standard meta-estimators. The GW meta-estimator is fairly straightforward to implement and can solve any case of sample overlap, within or between studies.
Highlights
• Meta-analyses are often conducted on empirical outcomes based on samples containing common observations.
• Sample overlap induces a correlation structure among empirical outcomes that harms the statistical properties of meta-analysis methods.
• We derive the analytic conditions under which sample overlap causes conventional meta-estimators to exhibit high rates of false positives.
• We propose a generalized-weights (GW) solution to sample overlap, which involves approximating the variance-covariance matrix that describes the correlation structure between outcomes; we show how to construct this matrix from information typically reported in the primary studies.
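A minimal sketch of the generalized-weights idea, assuming a textbook approximation for the covariance of two sample means sharing observations (Cov_ij ≈ se_i·se_j·n_ov/√(n_i·n_j)); the paper's own construction of the matrix may differ.

```python
import numpy as np

def gw_meta(est, se, n, n_overlap):
    """Sketch of a generalized-weights (GW) style meta-estimator.

    Builds an approximate variance-covariance matrix of the estimates
    from sample sizes and pairwise overlap counts, then applies GLS so
    that each estimate is weighted by its share of independent
    sampling information.
    """
    est, se, n = map(np.asarray, (est, se, n))
    k = len(est)
    Sigma = np.diag(se**2).astype(float)
    for i in range(k):
        for j in range(i + 1, k):
            # Correlation of two sample means with n_ov shared observations.
            rho = n_overlap[i][j] / np.sqrt(n[i] * n[j])
            Sigma[i, j] = Sigma[j, i] = rho * se[i] * se[j]
    ones = np.ones(k)
    Sinv = np.linalg.inv(Sigma)
    mean = ones @ Sinv @ est / (ones @ Sinv @ ones)
    sd = np.sqrt(1.0 / (ones @ Sinv @ ones))
    return mean, sd
```

With zero overlap the covariance matrix is diagonal and the estimator collapses to the familiar inverse-variance weighted mean.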
Meta-analysis upweights studies reporting lower standard errors and hence greater precision. But in empirical practice, notably in observational research, precision is not given to the researcher. Precision must be estimated, and can thus be p-hacked to achieve statistical significance. Simulations show that a modest dose of spurious precision creates a formidable problem for inverse-variance weighting and for bias-correction methods based on the funnel plot. Selection models fail to solve the problem, and the simple mean can beat sophisticated estimators. Cures for publication bias may become worse than the disease. We introduce an approach that surmounts spurious precision: the Meta-Analysis Instrumental Variable Estimator (MAIVE), which employs inverse sample size as an instrument for the reported variance.
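A hedged sketch of the MAIVE idea: instrument the reported variances with inverse sample size in a first stage, then run a PET-style regression on the instrumented standard errors, taking the intercept as the corrected effect. The second-stage specification here is a simplified illustration, not the paper's exact estimator.

```python
import numpy as np

def maive(est, se, n):
    """Sketch of a MAIVE-style estimator.

    First stage: regress reported variances on inverse sample size and
    keep the fitted values, purging any 'spurious precision' that does
    not scale with the sample.  Second stage: PET-style WLS regression
    of the estimates on the instrumented standard errors.
    """
    est, se, n = (np.asarray(a, float) for a in (est, se, n))
    # First stage: var_i = psi0 + psi1 * (1 / n_i) + error.
    Z = np.column_stack([np.ones_like(n), 1.0 / n])
    psi, *_ = np.linalg.lstsq(Z, se**2, rcond=None)
    var_hat = np.clip(Z @ psi, 1e-12, None)   # fitted (instrumented) variances
    se_hat = np.sqrt(var_hat)
    # Second stage: est_i = b0 + b1 * se_hat_i + error, weighted by 1/var_hat.
    X = np.column_stack([np.ones_like(se_hat), se_hat])
    w = 1.0 / var_hat
    Xw, yw = X * np.sqrt(w)[:, None], est * np.sqrt(w)
    beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    return beta[0]                             # corrected effect estimate
```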
In this article, we analyse whether tourism promotes economic growth using a general dynamic panel data model that incorporates individual and interactive fixed effects and allows for contemporaneous correlation in model innovations. The empirical study is based on quarterly series of GDP and tourist arrivals for 14 European countries covering the period from 1995 to 2019. Results indicate that the case for a positive long-run relationship between tourism and economic growth is rather weak, being slightly stronger for the period prior to the global economic and financial crisis of 2007 to 2010. When applying panel fractional cointegration techniques, we find evidence in favour of the tourism-led growth hypothesis (TLGH) for the full sample mainly for North European countries. For the pre-crisis period, on the other hand, we find evidence in favour of the TLGH for the major tourist destinations Spain and France.
This paper empirically analyses the determinants of foreign direct investment inflows into the Russian regions. This problem has become highly relevant for the necessary modernization of the Russian economy after the recent economic slowdown and sharp decrease in budget revenues. The authors model foreign direct investment flows with the use of the gravity approach, according to which investment flows are positively correlated with the size of the investor's country as well as the size of the recipient region and are negatively correlated with the distance between investor and recipient. The empirical analysis is based on a constructed database consisting of the foreign direct investment flows from 179 investor countries into 78 Russian regions for the period 2006-2013. The authors apply the Poisson Pseudo Maximum Likelihood method and identify the following factors determining foreign direct investment inflows into the Russian economy: the gross domestic product of the investor's country, the gross domestic product per capita in the recipient region, the distance from the investor to Moscow, the openness of the region, and the economic situation in the region.
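The Poisson pseudo-maximum likelihood estimator behind such gravity regressions can be sketched in a few lines. The gravity-style regressors in the usage example (log GDP, log distance) are illustrative placeholders; the solver implements the standard PPML first-order conditions.

```python
import numpy as np

def ppml(X, y, iters=25):
    """Poisson pseudo-maximum likelihood (PPML) via Newton-Raphson.

    Solves the first-order conditions  sum_i (y_i - exp(x_i'b)) x_i = 0,
    which makes the estimator robust to heteroskedasticity in
    multiplicative models and able to handle zero flows, since the
    dependent variable enters in levels rather than logs.
    """
    # Start from an OLS fit of log(1 + y) for a stable initial value.
    beta, *_ = np.linalg.lstsq(X, np.log1p(y), rcond=None)
    for _ in range(iters):
        eta = np.clip(X @ beta, -30.0, 30.0)   # guard against overflow
        mu = np.exp(eta)
        grad = X.T @ (y - mu)
        hess = X.T @ (X * mu[:, None])
        beta = beta + np.linalg.solve(hess, grad)
    return beta
```

Because the outcome stays in levels, region pairs with zero FDI flows remain in the estimation sample, one of the main reasons PPML is preferred over log-linearized OLS for gravity equations.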