Despite the widespread and rising popularity of structural equation modeling (SEM) in psychology, there is still much confusion surrounding how to choose an appropriate sample size for SEM. Currently available guidance primarily consists of sample-size rules of thumb that are not backed up by research, and of power analyses for detecting model misspecification. Missing from most current practices is power analysis for detecting a target effect (e.g., a regression coefficient between latent variables). In this article, we (a) distinguish power to detect model misspecification from power to detect a target effect, (b) report the results of a simulation study on power to detect a target regression coefficient in a three-predictor latent regression model, and (c) introduce a user-friendly Shiny app, pwrSEM, for conducting power analysis for detecting target effects in structural equation models.
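The simulation-based power analysis the abstract describes follows a general Monte Carlo recipe: repeatedly generate data under a population model with a known target effect, fit the model, and record how often the target coefficient is significant. The sketch below illustrates that logic with a simple observed-variable stand-in for the three-predictor latent regression (pwrSEM itself fits latent-variable models in R; the function name, effect size, and all parameter values here are illustrative assumptions, not the app's implementation).

```python
import numpy as np
from scipy import stats

def mc_power(n, beta_target=0.2, n_sims=2000, alpha=0.05, seed=0):
    """Monte Carlo power to detect one target coefficient in a
    three-predictor regression. An observed-variable stand-in for the
    latent regression model; all parameter values are assumptions."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        # Generate data under the population model: only the first
        # predictor has a nonzero (target) effect on the outcome.
        X = rng.standard_normal((n, 3))
        y = beta_target * X[:, 0] + rng.standard_normal(n)
        # Fit by least squares and test the target coefficient.
        Xd = np.column_stack([np.ones(n), X])
        beta_hat, res, *_ = np.linalg.lstsq(Xd, y, rcond=None)
        df = n - Xd.shape[1]
        sigma2 = res[0] / df
        se = np.sqrt(sigma2 * np.linalg.inv(Xd.T @ Xd)[1, 1])
        t_stat = beta_hat[1] / se
        if 2 * stats.t.sf(abs(t_stat), df) < alpha:
            rejections += 1
    # Power = proportion of simulated datasets rejecting the null.
    return rejections / n_sims
```

Estimated power rises with sample size, e.g. `mc_power(50)` is well below `mc_power(400)` for the same assumed effect; in a latent-variable version, power also depends on factor loadings and the number of indicators, which is why SEM power cannot be read off observed-variable tables.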
Incremental validity testing (i.e., testing whether a focal predictor is associated with an outcome above and beyond a covariate) is common (e.g., 57% of Personal Relationships articles in 2017), yet it is fraught with conceptual and statistical problems. First, researchers often use it to overemphasize the novelty or counterintuitiveness of findings, which hinders cumulative understanding. Second, incremental validity testing requires that the focal predictor and the covariate represent separate constructs; researchers risk committing the “jangle fallacy” without such evidence. Third, the most common approach to incremental validity testing (i.e., standard multiple regression, 88% of articles) inflates Type I error and can produce invalid conclusions. This article also discusses the relevance of these issues to dyadic/longitudinal designs and offers concrete solutions.
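The third problem — Type I error inflation in standard multiple regression — arises when the focal predictor and the covariate are both imperfect measures of the same underlying construct: the covariate's measurement error leaves residual construct variance for the focal predictor to absorb, so the focal predictor tests significant even though it has no incremental validity over the construct. A minimal simulation sketch of that mechanism (the function name, reliability, and sample size are illustrative assumptions, not taken from the article):

```python
import numpy as np
from scipy import stats

def spurious_rejection_rate(n=200, reliability=0.7, n_sims=2000,
                            alpha=0.05, seed=1):
    """Rejection rate for the focal predictor when it and the covariate
    are noisy indicators of one construct and the outcome depends only
    on that construct. All parameter values are assumptions."""
    rng = np.random.default_rng(seed)
    # Error SD chosen so each indicator has the stated reliability.
    err_sd = np.sqrt((1 - reliability) / reliability)
    rejections = 0
    for _ in range(n_sims):
        construct = rng.standard_normal(n)
        covariate = construct + err_sd * rng.standard_normal(n)
        focal = construct + err_sd * rng.standard_normal(n)
        # Outcome depends only on the construct: the focal predictor
        # has zero incremental validity beyond it.
        y = construct + rng.standard_normal(n)
        X = np.column_stack([np.ones(n), focal, covariate])
        beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
        df = n - X.shape[1]
        se = np.sqrt((res[0] / df) * np.linalg.inv(X.T @ X)[1, 1])
        p = 2 * stats.t.sf(abs(beta[1] / se), df)
        rejections += p < alpha
    return rejections / n_sims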
Psychological research on empathy typically focuses on understanding its effects on empathizers and empathic targets. Little is known, however, about the effects of empathy beyond its dyadic context. Taking an extradyadic perspective, we examined how third-party observers evaluate empathizers. Seven experiments documented that observers' evaluations of empathizers depend on the target of empathy. Empathizers (vs. nonempathizers) of a stressful experience were respected/liked more when the empathic target was positive (e.g., children's hospital worker), but not when the target was negative (e.g., White supremacist; Experiments 1 and 2). Empathizers were respected/liked more when responding to a positive target who disclosed a positive experience (i.e., a personal accomplishment), but less when responding to a negative target who disclosed a positive experience (Experiment 3). These effects were partly, but not solely, attributable to the positivity of empathic responses (Experiment 4). Expressing empathy (vs. condemnation) toward a negative target resulted in less respect/liking when the disclosed experience was linked to the source of target valence (i.e., stress from White supremacist job; Experiments 5 through 7), but more respect/liking when the experience was unrelated to the source of target valence (i.e., stress from cancer; Experiment 7). Overall, empathizers were viewed as warmer, but to a lesser extent when responding to a negative target. These findings highlight the importance of considering the extradyadic impact of empathy and suggest that although people are often encouraged to empathize with disliked others, they are not always favored for doing so.