Despite voluminous evidence for the paradoxical finding that providing individuals with more options can be detrimental to choice, the question of whether and when large assortments impede choice remains open. Although extant research has identified a variety of antecedents and consequences of choice overload, the findings of individual studies fail to come together into a cohesive understanding of when large assortments benefit choice and when they are detrimental to it. In a meta‐analysis of 99 observations (N = 7,202) reported by prior research, we identify four key factors—choice set complexity, decision task difficulty, preference uncertainty, and decision goal—that moderate the impact of assortment size on choice overload. We further show that each of these four factors has a reliable and significant impact on choice overload, whereby greater choice set complexity, higher decision task difficulty, higher preference uncertainty, and a more prominent, effort‐minimizing goal facilitate choice overload. We also find that four of the measures of choice overload used in prior research—satisfaction/confidence, regret, choice deferral, and switching likelihood—are equally powerful measures of choice overload and can be used interchangeably. Finally, we document that when moderating variables are taken into account, the overall effect of assortment size on choice overload is significant—a finding counter to the data reported by prior meta‐analytic research.
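The moderator analysis described above amounts, in its simplest form, to a meta-regression: each observation's effect size is regressed on coded moderators, with inverse-variance weights. The sketch below is illustrative only, with made-up effect sizes and a single hypothetical binary moderator (high vs. low decision task difficulty); it is not the paper's actual model or data.

```python
import numpy as np

# Hypothetical effect sizes of assortment size on choice overload,
# their sampling variances, and a binary moderator
# (1 = high decision task difficulty, 0 = low)
effects = np.array([0.10, -0.05, 0.40, 0.35, 0.02, 0.45])
variances = np.array([0.04, 0.05, 0.03, 0.04, 0.05, 0.03])
difficulty = np.array([0, 0, 1, 1, 0, 1])

# Weighted least squares meta-regression: effect ~ intercept + difficulty
W = np.diag(1.0 / variances)                       # inverse-variance weights
X = np.column_stack([np.ones_like(effects), difficulty])
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ effects)
# beta[0]: pooled effect when the moderator is absent
# beta[1]: how much the moderator shifts the overload effect
```

In this toy data, a positive `beta[1]` would indicate that higher task difficulty amplifies choice overload, mirroring the moderation pattern reported in the abstract.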
A typical behavioral research paper features multiple studies of a common phenomenon that are analyzed solely in isolation. Because the studies address a common phenomenon, this practice is inefficient and forgoes important benefits that can be obtained only by analyzing them jointly in a single-paper meta-analysis (SPM). To facilitate SPM, we introduce meta-analytic methodology that is user-friendly, widely applicable, and specially tailored to the SPM of the set of studies that appear in a typical behavioral research paper. Our SPM methodology provides important benefits for study summary, theory testing, and replicability, which we illustrate via three case studies involving papers recently published in the Journal of Consumer Research and the Journal of Marketing Research. We advocate that authors of typical behavioral research papers use SPM to supplement the single-study analyses that independently examine the multiple studies in the body of their papers, as well as the "qualitative meta-analysis" that verbally synthesizes the studies in the general discussion. Used in this way, SPM requires only a minor modification of current practice. We provide an easy-to-use website that implements our SPM methodology.
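The core computation behind pooling a paper's studies is the standard inverse-variance weighted average. The snippet below is a minimal sketch of that pooling step with hypothetical numbers, not the authors' SPM methodology or website implementation.

```python
import numpy as np

def fixed_effect_meta(effects, variances):
    """Inverse-variance weighted pooled effect and its standard error."""
    effects = np.asarray(effects, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    pooled = np.sum(weights * effects) / np.sum(weights)
    se = np.sqrt(1.0 / np.sum(weights))
    return pooled, se

# Hypothetical effect sizes (Cohen's d) and sampling variances
# from the three studies of one behavioral paper
d = [0.35, 0.48, 0.29]
v = [0.02, 0.03, 0.025]
pooled, se = fixed_effect_meta(d, v)
```

The pooled estimate has a smaller standard error than any single study, which is one of the efficiency gains the abstract attributes to joint analysis.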
We review and evaluate selection methods, a prominent class of techniques first proposed by Hedges (1984) that assess and adjust for publication bias in meta-analysis, via an extensive simulation study. Our simulation covers both restrictive and more realistic settings and proceeds across multiple metrics that assess different aspects of model performance. This evaluation is timely in light of two recently proposed approaches, the so-called p-curve and p-uniform approaches, which can be viewed as alternative implementations of the original Hedges selection method approach. We find that the p-curve and p-uniform approaches perform reasonably well but not as well as the original Hedges approach in the restrictive setting for which all three were designed. We also find they perform poorly in more realistic settings, whereas variants of the Hedges approach perform well. We conclude by urging caution in the application of selection methods: given the idealistic model assumptions underlying selection methods and the sensitivity of population average effect size estimates to them, we advocate that selection methods be used less for obtaining a single estimate that purports to adjust for publication bias ex post and more for sensitivity analysis; that is, for exploring the range of estimates that result from assuming different forms and severities of publication bias.
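The bias that selection methods try to correct is easy to demonstrate by simulation: if only statistically significant results are published, the naive average of published estimates overstates the true effect. The sketch below is a simplified illustration of that mechanism with arbitrary parameters; it is not the paper's simulation design, and it implements only a crude one-sided significance filter.

```python
import numpy as np

rng = np.random.default_rng(1)
true_effect, n_per_study, n_studies = 0.2, 50, 2000

# Each study estimates the effect with standard error ~ 1/sqrt(n)
se = 1.0 / np.sqrt(n_per_study)
estimates = rng.normal(true_effect, se, size=n_studies)
z = estimates / se

# One-sided selection: only "significant" studies (z > 1.96) get published
published = estimates[z > 1.96]
naive = published.mean()  # inflated relative to true_effect
```

Rerunning the simulation under different selection rules (e.g., publishing some fraction of nonsignificant studies) traces out the range of naive estimates, which is the sensitivity-analysis use of selection models the abstract advocates.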
In this article, I show how item response models can be used to capture multiple response processes in psychological applications. Intuitive and analytical responses, agree–disagree answers, response refusals, socially desirable responding, differential item functioning, and choices among multiple options are considered. In each of these cases, I show that the response processes can be measured via pseudoitems derived from the observed responses. The estimation of these models via standard software programs that allow for missing data is also discussed. The article concludes with two detailed applications that illustrate the prevalence of multiple response processes.
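The pseudoitem idea is that one observed response is recoded into several binary indicators, each reflecting a distinct response process. As a hedged illustration only (the specific coding scheme and category labels are assumptions, not taken from the article), a 4-point agree–disagree response can be split into a direction process and an extremity process:

```python
# Decompose a 4-point agree-disagree response (1 = strongly disagree,
# 4 = strongly agree) into two binary pseudoitems:
#   direction: 1 = agree side (3, 4), 0 = disagree side (1, 2)
#   extremity: 1 = extreme category (1 or 4), 0 = moderate (2 or 3)
def pseudoitems(response):
    direction = 1 if response >= 3 else 0
    extremity = 1 if response in (1, 4) else 0
    return direction, extremity

codes = {r: pseudoitems(r) for r in (1, 2, 3, 4)}
```

Each pseudoitem can then be fed to a standard item response model; because some process indicators are undefined for some responses in richer schemes, software that allows missing data is needed, as the abstract notes.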
The recently proposed class of item response tree models provides a flexible framework for modeling multiple response processes. This feature is particularly attractive for understanding how response styles may affect answers to attitudinal questions. By facilitating the dissociation of response styles and attitudinal traits, item response tree models can provide powerful process tests of how different response formats may affect the measurement of substantive traits. In an empirical study, 3 response formats were used to measure the 2-dimensional Personal Need for Structure traits. Different item response tree models are proposed to capture the response styles for each of the response formats. These models show that the response formats give rise to similar trait measures but different response-style effects.