Meta-analysis is essential for cumulative science, but its validity is compromised by publication bias. To mitigate the impact of publication bias, one may apply selection models, which estimate the degree to which non-significant studies are suppressed. Implemented in JASP, these methods allow researchers without programming experience to conduct state-of-the-art publication-bias-adjusted meta-analyses. In this tutorial, we demonstrate how to conduct a publication-bias-adjusted meta-analysis in JASP and interpret the results. First, we explain how frequentist selection models correct for publication bias. Second, we introduce Robust Bayesian Meta-Analysis (RoBMA), a Bayesian extension of the frequentist selection models. We illustrate the methodology with two data sets and discuss the interpretation of the results. In addition, we include example text to provide concrete guidance on reporting the meta-analytic results in an academic article. Finally, three tutorial videos are available at https://tinyurl.com/y4g2yodc.
Tendeiro and Kiers (2019) provide a detailed and scholarly critique of Null Hypothesis Bayesian Testing (NHBT) and its central component, the Bayes factor, which allows researchers to update knowledge and quantify statistical evidence. Tendeiro and Kiers conclude that NHBT constitutes an improvement over frequentist p-values, but they primarily elaborate on a list of eleven ‘issues’ with NHBT. In this commentary, we provide context for each issue and conclude that many of the issues may in fact be conceived as pronounced advantages of NHBT.
In a sequential hypothesis test, the analyst checks at multiple steps during data collection whether sufficient evidence has accrued to make a decision about the tested hypotheses. As soon as sufficient information has been obtained, data collection is terminated. Here, we compare two sequential hypothesis testing procedures that have recently been proposed for use in psychological research: the Sequential Probability Ratio Test (SPRT; Schnuerch & Erdfelder, 2020) and the Sequential Bayes Factor Test (SBFT; Schönbrodt et al., 2017). We show that although the two methods have been presented as distinct methodologies in the past, they share many similarities and can even be regarded as two instances of the same overarching hypothesis testing framework. We demonstrate that the two methods use the same mechanisms for evidence monitoring and error control, and that differences in efficiency between the methods depend on the exact specification of the statistical models involved. Given the close relationship between the SPRT and SBFT, we argue that the choice of the sequential testing method should be regarded as a continuous choice within a unified framework rather than a dichotomous choice between two methods. We present several considerations researchers can make to navigate the design decisions in the SPRT and SBFT.
Compared to the relatively standard way of conducting null hypothesis significance testing, there seem to be fairly large differences in opinion among experts in Bayesian statistics on how best to conduct Bayesian inference. Employing Bayesian methods involves making choices about prior distributions, likelihood functions, and robustness checks, as well as about how to report, visualize, and interpret the results. This wide range of choices can make it daunting for social scientists to transition to Bayesian inference in their own research. In this review, we conducted an expert survey in which nine of the most prominent Bayesian statisticians in the behavioural sciences shared their thinking on seven key choices that need to be made when conducting and reporting Bayesian inference. This paper highlights the areas of their agreement and the arguments behind their disagreements. The results of the iterative survey show that the experts agree on many more topics than they disagree on. The overall message is that instead of following rituals, researchers should understand the reasoning behind the different positions and make their choices on a case-by-case basis.