Background: In clinical research, populations are often selected on the sum-score of diagnostic criteria such as symptoms. Estimating statistical models on data selected via a function of the analyzed variables introduces Berkson's bias, which threatens the validity of findings in the clinical literature. The aim of the present paper is to investigate the effect of Berkson's bias on the performance of the two most commonly used psychological network models: the Gaussian Graphical Model (GGM) for continuous and ordinal data, and the Ising Model for binary data.

Methods: In two simulation studies, we test how well the two models recover a true network structure when estimation is based on a subset of the data typical of clinical studies. The network is based on a dataset of 2,807 patients diagnosed with major depression, and nodes in the network are items from the Hamilton Rating Scale for Depression (HRSD). The simulation studies cover different scenarios by varying (1) sample size and (2) the cut-off value of the sum-score that governs the selection of participants.

Results: In both studies, higher cut-off values are associated with worse recovery of the network structure. As expected from the Berkson's bias literature, selection reduced recovery rates by inducing negative connections between the items.

Conclusion: Our findings provide evidence that Berkson's bias is a considerable and underappreciated problem in the clinical network literature. Furthermore, we discuss potential solutions for circumventing Berkson's bias and their pitfalls.
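The selection mechanism described above can be illustrated with a minimal simulation. The sketch below (an illustration of the general phenomenon, not the paper's HRSD data or code; the cut-off value is an arbitrary choice) generates two independent "symptom" variables, then keeps only cases whose sum-score exceeds a cut-off. In the full sample the variables are uncorrelated; after selection, a negative association appears, which is the signature of Berkson's bias.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent continuous "symptom" scores (illustrative only).
n = 100_000
x = rng.normal(size=n)
y = rng.normal(size=n)

# Full-sample correlation is ~0 because x and y are independent.
r_full = np.corrcoef(x, y)[0, 1]

# Berkson-style selection: retain only cases whose sum-score exceeds a
# cut-off, mimicking inclusion criteria based on total symptom severity.
selected = (x + y) > 1.5
r_selected = np.corrcoef(x[selected], y[selected])[0, 1]

# Conditioning on a high sum-score makes the scores trade off against each
# other within the selected subsample, so r_selected turns negative.
print(f"full sample r = {r_full:.3f}, selected sample r = {r_selected:.3f}")
```

Raising the cut-off shrinks the selected subsample and strengthens the induced negative correlation, which is consistent with the result that higher cut-offs lead to worse recovery of the network structure.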
Network psychometrics is a new direction in psychological research that conceptualizes multivariate data as interacting systems. Variables are represented as nodes, and edges between them represent (partial) associations. Current estimation methods mostly use a frequentist approach, which does not allow for proper uncertainty quantification of the model and its parameters. Here, we outline a Bayesian approach to network analysis that offers three main benefits. In particular, applied researchers can use Bayesian methods to (1) determine structure uncertainty, (2) obtain evidence for edge inclusion and exclusion (i.e., distinguish conditional (in)dependence between variables), and (3) quantify parameter precision. The paper provides a conceptual introduction to Bayesian inference and describes how researchers can realize these three benefits for networks. Furthermore, we review the available R packages for the Bayesian analysis of networks and introduce a new implementation in the open-source, user-friendly software JASP. The methodology is illustrated with a worked-out example of a network of personality traits and mental health.
Scientific theories reflect some of humanity's greatest epistemic achievements. The best theories motivate us to search for discoveries, guide us towards successful interventions, and help us to explain and organize knowledge. Such theories require a high degree of specificity, and specifying them requires modeling skills. Unfortunately, in psychological science, theories are often not precise, and psychological scientists often lack the technical skills to formally specify existing theories. This problem raises the question: How can we promote formal theory development in psychology, where there are many content experts but few modelers? In this paper, we discuss one strategy for addressing this issue: a Many Modelers approach. A Many Modelers project consists of mixed teams of modelers and non-modelers who collaborate to create a formal theory of a phenomenon. We report a proof of concept of this approach, which we piloted as a three-hour hackathon at the SIPS 2021 conference. We find that (a) psychologists who have never developed a formal model can become excited about formal modeling and theorizing; (b) a division of labor in formal theorizing is possible, where only one or a few team members possess the prerequisite modeling expertise; and (c) first working prototypes of a theoretical model can be created in a short period of time.