Risky prospects come in different forms. Sometimes options are presented with convenient descriptions summarizing outcomes and their respective likelihoods; people can thus make decisions from description. In other cases, people must call on their own encounters with such prospects, making decisions from experience. Recent studies report a systematic and large description-experience gap. One key determinant of this gap is people's tendency to rely on small samples, which results in substantial sampling error. Here we examine whether this gap exists even when people draw on large samples. Although smaller, the gap persists. We use the choices from the present and previous studies to test a large set of candidate strategies that model decisions from experience, including 12 heuristics, two associative-learning models, and the two-stage model of cumulative prospect theory. This model analysis suggests, as one explanation for the description-experience gap that remains with large samples, that people treat probabilities differently in the two types of decisions.
Abstract. One of the central questions addressed in the READY project was how a system can automatically recognize situationally determined resource limitations of its user, in particular time pressure and cognitive load. This chapter summarizes most of the work done in READY on this topic, presenting some previously unpublished results as well. We first consider why on-line recognition of resource limitations can be useful, discussing the ways in which a system might adapt its behavior to perceived resource limitations. We then summarize a number of approaches to the recognition problem that have been taken in READY and other projects, before focusing on one particular approach: the analysis of features of a user's speech. In each of two similarly structured experiments, we created four experimental conditions that varied in terms of whether the user was (a) required to produce spoken utterances quickly or not; and (b) navigating within a simulated airport terminal or standing still. In the second experiment, additional distraction was introduced through continuous loudspeaker announcements. The speech produced by the experimental subjects (32 in each experiment) was coded in terms of seven variables. We report the extent to which each of these variables was influenced by the subjects' resource limitations. We also trained dynamic Bayesian networks on the resulting data to see how well the information in the users' speech could serve as evidence of which condition the user had been in. The results yield information about the accuracy that can be attained in this way and about the diagnostic value of some specific features of speech.