The mathematical representation of E. Brunswik's (1952) lens model has been used extensively to study human judgment and provides a unique opportunity to conduct a meta-analysis of studies that covers roughly 5 decades. Specifically, the authors analyzed statistics of the "lens model equation" (L. R. Tucker, 1964) associated with 249 different task environments obtained from 86 articles. On average, fairly high levels of judgmental achievement were found, and people were seen to be capable of achieving similar levels of cognitive performance in noisy and predictable environments. Further, the effects of task characteristics that influence judgment (numbers and types of cues, inter-cue redundancy, function forms and cue weights in the ecology, laboratory versus field studies, and experience with the task) were identified and estimated. A detailed analysis of learning studies revealed that the most effective form of feedback was information about the task. The authors also analyzed empirically under what conditions the application of bootstrapping (i.e., replacing judges by their linear models) is advantageous. Finally, the authors note shortcomings of the kinds of studies conducted to date, limitations in the lens model methodology, and possibilities for future research.

Keywords: judgment, lens model, linear models, learning, bootstrapping

Since the 1960s, many psychologists have used the framework of Brunswik's (1952) lens model to study processes in which humans make predictions of specific criteria (see, e.g., Brehmer & Joyce, 1988; Cooksey, 1996; Hastie & Kameda, 2005). For example, a person might make a judgment (i.e., a prediction) about another person's intelligence, about the likelihood of rain, about whether a job candidate will be successful, and so on. In all these cases, the simple beauty of Brunswik's model lies in recognizing that both the person's judgment and the criterion being predicted can be thought of as two separate functions of cues that are available in the environment. Thus, the accuracy of human judgment depends on the extent to which the function that describes it matches its environmental counterpart.

But how good or accurate are people at making judgments, and on what does this depend? These are important questions that have generated considerable controversy in the psychological literature (Cohen, 1981; Gigerenzer, 1996; Kahneman & Tversky, 1996). Whereas it is unlikely that these questions can be answered satisfactorily by any single approach, an advantage of research conducted within the Brunswikian tradition is the use of a common methodology for formalizing the lens model. Thus, not only can researchers within this tradition communicate results within a common framework, but it is also possible to aggregate results quantitatively across many studies and make statements that reflect the accumulation of results. This is the purpose of the current paper, in which we present a meta-analysis of studies conducted using the lens model over a period of five decades.

The paper is organized as follows. We first describe the mathematical formulation of the lens model. Second, we specify how we identified and included particular studies ...
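For reference, the lens model equation mentioned above is conventionally written (following Tucker, 1964; the notation here is the standard one in this literature rather than a quotation from the article) as

    r_a = G R_e R_s + C \sqrt{1 - R_e^2} \sqrt{1 - R_s^2}

where r_a (achievement) is the correlation between a person's judgments and the criterion, R_e (environmental predictability) is the multiple correlation from regressing the criterion on the cues, R_s (consistency) is the multiple correlation from regressing the judgments on the cues, G (matching) is the correlation between the predictions of the two regression models, and C is the correlation between their residuals.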
Much research has highlighted incoherent implications of judgmental heuristics, yet other findings have demonstrated high correspondence between predictions and outcomes. At the same time, judgment has been well modeled by "as if" linear models. Accepting the probabilistic nature of the environment, the authors use statistical tools to model how the performance of heuristic rules varies as a function of environmental characteristics. They further characterize the human use of linear models by exploring effects of different levels of cognitive ability. They illustrate with both theoretical analyses and simulations. Results are linked to the empirical literature by a meta-analysis of lens model studies. Using the same tasks, the authors estimate the performance of both heuristics and humans, where the latter are assumed to use linear models. Their results emphasize that judgmental accuracy depends on matching characteristics of rules and environments and highlight the trade-off between using linear models and heuristics. Whereas the former can be cognitively demanding, the latter are simple to implement. However, heuristics require knowledge to indicate when they should be used.
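As a purely illustrative sketch of the kind of simulation described above, the following Python snippet compares a single-cue heuristic with an equal-weight linear rule in two synthetic binary-choice environments; the cue weights and noise level are our own assumptions, not parameters from the article:

    # Illustrative sketch: how a simple heuristic and a linear rule fare in
    # environments with different weight structures (hypothetical parameters).
    import numpy as np

    rng = np.random.default_rng(0)

    def choice_accuracy(weights, noise_sd=1.0, n_pairs=100_000):
        k = len(weights)
        a = rng.normal(size=(n_pairs, k))        # cue values, option A
        b = rng.normal(size=(n_pairs, k))        # cue values, option B
        ya = a @ weights + rng.normal(scale=noise_sd, size=n_pairs)
        yb = b @ weights + rng.normal(scale=noise_sd, size=n_pairs)
        better_a = ya > yb                       # which option is truly better
        sv = a[:, 0] > b[:, 0]                   # heuristic: best cue only
        ew = a.sum(axis=1) > b.sum(axis=1)       # linear rule: equal weights
        return (sv == better_a).mean(), (ew == better_a).mean()

    for name, w in [("noncompensatory", np.array([4.0, 1.0, 0.5])),
                    ("compensatory", np.array([1.0, 1.0, 1.0]))]:
        sv_acc, ew_acc = choice_accuracy(w)
        print(f"{name}: single-cue {sv_acc:.3f}, equal weights {ew_acc:.3f}")

Under these assumptions, the single-cue heuristic beats equal weights when one cue dominates and loses when weights are equal, the kind of rule-environment matching the abstract emphasizes.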
Excess entry, or the high failure rate of market-entry decisions, is often attributed to overconfidence exhibited by entrepreneurs. We show analytically that whereas excess entry is an inevitable consequence of imperfect assessments of entrepreneurial skill, it does not imply overconfidence. Judgmental fallibility leads to excess entry even when everyone is underconfident. Self-selection implies greater confidence (but not necessarily overconfidence) among those who start new businesses than among those who do not, and among successful entrants than among failures. Our results question claims that "entrepreneurs are overconfident" and emphasize the need to understand the role of judgmental fallibility in producing economic outcomes.
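A minimal numerical illustration of this argument follows (the article's result is analytical; all parameters below, including the size of the downward bias and the entry rule, are our own assumptions):

    # Illustrative sketch: imperfect self-assessment yields excess entry even
    # when every agent is underconfident (self-assessments biased downward).
    import numpy as np

    rng = np.random.default_rng(1)
    n, capacity = 1_000, 100                 # potential entrants; room for 100
    skill = rng.normal(size=n)
    perceived = skill + rng.normal(scale=1.0, size=n) - 0.3   # noisy, biased low
    threshold = np.quantile(skill, 1 - capacity / n)  # skill needed for top 100
    entrants = perceived > threshold         # enter if you think you clear the bar
    gap = perceived - skill                  # confidence relative to true skill
    print(f"entrants: {entrants.sum()} (capacity {capacity})")
    print(f"mean confidence gap: entrants {gap[entrants].mean():+.2f}, "
          f"non-entrants {gap[~entrants].mean():+.2f}")

With these hypothetical numbers, entry exceeds market capacity even though the average agent underestimates his or her own skill, and entrants appear more confident than non-entrants purely through self-selection.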
Given the difficulties people experience in making trade-offs, what are the consequences of using simple models that avoid trade-offs? We examine choices by such models in environments where "true" preferences are linear and attributes are binary. A deterministic elimination-by-aspects (DEBA) model is highly effective over a range of conditions. When preferences are quite compensatory, however, a modified equal weighting (EW) model that uses DEBA to resolve ties is more effective. We explore the sensitivity of results to errors in using DEBA, to different distributions of alternatives, and to error in "true" preferences. Under the conditions examined here, the outcomes of these "boundedly rational" models are highly consistent with "rational" models that explicitly confront trade-offs. We emphasize the importance of binary attributes in reaching these conclusions.
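To make the DEBA process concrete, here is a minimal Python sketch; the data and the use of attribute order as the importance ranking are our assumptions for illustration:

    # Illustrative sketch of DEBA: screen alternatives on the most important
    # binary attribute first, keep those possessing it, and repeat with the
    # next attribute; stop when one alternative remains or attributes run out.
    def deba(alternatives):
        """alternatives: list of tuples of 0/1 attribute values,
        ordered from most to least important attribute."""
        remaining = list(alternatives)
        n_attrs = len(remaining[0])
        for i in range(n_attrs):
            passing = [alt for alt in remaining if alt[i] == 1]
            if passing:              # eliminate only if some alternative passes
                remaining = passing
            if len(remaining) == 1:
                break
        return remaining             # more than one left means a full tie

    options = [(1, 0, 1), (1, 1, 0), (0, 1, 1), (1, 1, 1)]
    print(deba(options))             # -> [(1, 1, 1)]

The modified equal-weighting model the abstract mentions can be built on top of this: sum the binary attributes first and invoke deba only among alternatives tied on that sum.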
When can a single variable be more accurate in binary choice than multiple sources of information? We derive analytically the probability that a single variable (SV) will correctly predict one of two choices when both criterion and predictor are continuous variables. We further provide analogous derivations for multiple regression (MR) and equal weighting (EW) and specify the conditions under which the models differ in expected predictive ability. Key factors include variability in cue validities, intercorrelation between predictors, and the ratio of predictors to observations in MR. Theory and simulations are used to illustrate the differential effects of these factors. Results directly address why and when "one-reason" decision making can be more effective than analyses that use more information. We thus provide analytical backing to intriguing empirical results that, to date, have lacked theoretical justification. There are predictable conditions for which one should expect "less to be more."
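One way to see the flavor of such derivations: for a single bivariate-normal cue with validity rho, the probability that the SV model picks the higher-criterion option has the closed form 1/2 + arcsin(rho)/pi (a standard result for bivariate-normal differences, used here for illustration rather than as a quotation of the article's derivations). The short Python check below verifies it by simulation:

    # Illustrative check: simulated SV accuracy vs the arcsine formula.
    import numpy as np

    rng = np.random.default_rng(2)
    rho, n_pairs = 0.6, 200_000
    cov = [[1, rho], [rho, 1]]
    ya, xa = rng.multivariate_normal([0, 0], cov, size=n_pairs).T
    yb, xb = rng.multivariate_normal([0, 0], cov, size=n_pairs).T
    simulated = ((xa > xb) == (ya > yb)).mean()     # SV picks higher-cue option
    analytic = 0.5 + np.arcsin(rho) / np.pi
    print(f"simulated {simulated:.4f} vs analytic {analytic:.4f}")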