Can the vague meanings of probability terms such as doubtful, probable, or likely be expressed as membership functions over the [0, 1] probability interval? A function for a given term would assign a membership value of zero to probabilities not at all in the vague concept represented by the term, a membership value of one to probabilities definitely in the concept, and intermediate membership values to probabilities represented by the term to some degree. A modified pair-comparison procedure was used in two experiments to empirically establish and assess membership functions for several probability terms. Subjects performed two tasks in both experiments: They judged (a) to what degree one probability rather than another was better described by a given probability term, and (b) to what degree one term rather than another better described a specified probability. Probabilities were displayed as relative areas on spinners. Task (a) data were analyzed from the perspective of conjoint-measurement theory, and membership function values were obtained for each term according to various scaling models. The conjoint-measurement axioms were well satisfied, and goodness-of-fit measures for the scaling procedures were high. Individual differences were large but stable. Furthermore, the derived membership function values satisfactorily predicted the judgments independently obtained in task (b). The results support the claim that the scaled values represented the vague meanings of the terms to the individual subjects in the present experimental context. Methodological implications are discussed, as are substantive issues raised by the data regarding the vague meanings of probability terms.

Most people, including expert forecasters, generally prefer communicating their uncertain opinions with nonnumerical terms such as doubtful, probable, slight chance, very likely, and so forth, rather than with numerical probabilities.
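The membership-function idea described above can be sketched concretely. The trapezoidal shape and every breakpoint below are purely illustrative assumptions; the article derives such functions empirically from pair-comparison judgments, not from any formula.

```python
# Hypothetical trapezoidal membership function for the term 'likely'.
# Breakpoints (.5, .7, .9) are made up for illustration only.
def membership_likely(p: float) -> float:
    """Degree (0..1) to which probability p fits the vague term 'likely'."""
    if p < 0.5:
        return 0.0                      # not at all 'likely'
    if p < 0.7:
        return (p - 0.5) / 0.2          # partial membership, rising linearly
    if p <= 0.9:
        return 1.0                      # definitely 'likely'
    return (1.0 - p) / 0.1              # near-certainty fits 'certain' better

print(membership_likely(0.6))           # 0.5: p = .6 is 'likely' to degree one half
```

Task (a) then amounts to comparing such membership values for two probabilities under one term, and task (b) to comparing values of two terms' functions at one probability.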
On anecdotal grounds, the imprecision of nonnumerical terms is preferred to the precision of probability numbers for at least two reasons. First, opinions are generally not precise, and therefore, the claim goes, it would be misleading to represent them precisely. For example, commenting that numbers denote authority and a precise understanding of relations, a committee of the U.S. Na-

This research was supported by Contract MDA 903-83-K-0347 from the U.S. Army Research Institute for the Behavioral and Social Sciences to the L. L. Thurstone Psychometric Laboratory, University of North Carolina at Chapel Hill. The views, opinions, and findings contained in this paper are those of the authors and should not be construed as an official Department of the Army position, policy, or decision. Barbara Forsyth is now at Ohio University. We thank Samuel Fillenbaum for numerous helpful discussions throughout the course of the work, and James Cox, Brent Cohen, Samuel Fillenbaum, and Jaan Valsiner for comments on a previous draft of this article.
This article models the cognitive processes underlying learning and sequential choice in a risk-taking task for the purposes of understanding how they occur in this moderately complex environment and how behavior in it relates to self-reported real-world risk taking. The best stochastic model assumes that participants incorrectly treat outcome probabilities as stationary, update probabilities in a Bayesian fashion, evaluate choice policies prior to rather than during responding, and maintain constant response sensitivity. The model parameter associated with subjective value of gains correlates well with external risk taking. Both the overall approach, which can be expanded as the basic paradigm is varied, and the specific results provide direction for theories of risky choice and for understanding risk taking as a public health problem.
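The model's twin assumptions, that participants treat the (actually nonstationary) outcome probability as stationary and update their belief about it in Bayesian fashion, can be illustrated with a minimal conjugate-updating sketch. The Beta-Bernoulli form, the uniform prior, and the outcome sequence below are illustrative assumptions, not the article's estimated model.

```python
# Minimal Beta-Bernoulli sketch of 'stationary probability + Bayesian updating'.
from fractions import Fraction

def update_beta(alpha: int, beta: int, success: bool):
    """Conjugate Bayesian update of a Beta(alpha, beta) belief after one outcome."""
    return (alpha + 1, beta) if success else (alpha, beta + 1)

alpha, beta = 1, 1                      # uniform prior on the assumed-stationary probability
for outcome in [True, True, False, True]:
    alpha, beta = update_beta(alpha, beta, outcome)

posterior_mean = Fraction(alpha, alpha + beta)
print(posterior_mean)                   # 2/3 after three successes and one failure
```

Because the learner assumes stationarity, evidence accumulates indefinitely and the belief becomes progressively harder to move, which is exactly the error the model attributes to participants when the true probabilities drift.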
Despite much disagreement regarding how probabilistic information is best communicated, virtually no research has been done to determine what communication modes people prefer or what factors affect their communication preferences. To address these issues, we surveyed 442 graduate and undergraduate students across several specialties and universities. Some group differences emerged, but overall, 34% preferred both conveying and receiving information about uncertainty in numerical rather than verbal form, 30% expressed the opposite preferences, and 35% indicated that they preferred to receive such information numerically but to convey it verbally. Generally, respondents who endorsed the use of verbal information said that it is easier to use, as well as more natural and personal. Those preferring numerical information said that it is more precise. Virtually all respondents, however, evidenced a willingness to use the opposite of their initially preferred mode if the situation should warrant it. The willingness to switch from one mode to another was said to depend on the level of precision implied by the data and the importance of the issue, as was suggested by Budescu and Wallsten (1987). These results may be helpful in structuring risk communication strategies.

The importance of risk communication has increased dramatically in recent years as the public has become more aware of and interested in environmental and medical issues that affect individuals and society. Although much has been written about the best modes for communicating with individuals about uncertainty, little research has been aimed at determining what modes people prefer or what factors affect their preferences. We present survey results relevant to these questions. To set the stage, we will review the issues very briefly.
Although decision and risk analyses are frequently done in terms of estimated or judged probabilities (Morgan & Henrion, 1990; von Winterfeldt & Edwards, 1986), the risk communication literature is virtually unanimous in stating that the presentation of statistical information alone is insufficient for communicating with the public (Fisher, 1991; Linnerooth-Bayer & Wahlström, 1991; National Research Council, 1981; Slovic, 1986) or even for experts' communications to decision makers (Ruckelshaus, 1984). For example, the National Research Council (1981) wrote that

It is usually dangerous for messages to characterize the overall level of uncertainty quantitatively, as might be done by describing statistical confidence intervals. In most situations expert assessments have multiple sources of uncertainty, and statistical measures do not adequat...

This research was supported by National Science Foundation Grants BNS8608692 and BNS8908554. We thank Ann Fisher and Baruch Fischhoff for comments on an earlier draft. R.Z. is in the Department of Marketing at Pennsylvania State. Correspondence should be addressed to T. S. Wallsten, Department of Psychology, University of North Carolina, Chapel Hill, NC 27599-3270.
Sequential risk-taking tasks, especially the Balloon Analogue Risk Task (BART), have proven to be powerful and useful methods for studying and identifying real-world risk takers. A natural index in these tasks is the average number of risks the participant takes in a trial (e.g., pumps on the balloons), but this is difficult to estimate because some trials terminate early as a consequence of those risks (e.g., when the desired number of balloon pumps exceeds the explosion point). The standard corrective strategy is to use an adjusted score that ignores such event-terminated trials. Although previous data support the utility of this adjusted score, the authors show formally that it is biased. Therefore, the authors developed an automatic response procedure, in which respondents state at the beginning of each trial how many risks they wish to take and then observe the sequence of events unfold. A study comparing this new automatic and the original manual BART shows that the automatic procedure yields unbiased statistics while maintaining the BART's predictive validity for substance use. The authors also found that providing respondents with the expected-value-maximizing strategy and complete trial-by-trial feedback increased the number of risks they were willing to take during the BART. The authors interpret these results in terms of the potential utility of the automatic version, including shorter administration time, unbiased behavioral measures, and reduced motor involvement, the last of which is important in neuroscientific investigations or with clinical populations that have motor limitations.
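The direction of the adjusted score's bias can be seen in a small simulation. The uniform explosion points and uniform intended pump counts below are illustrative assumptions, not the BART's actual parameters: because trials with more intended pumps are more likely to end in an explosion and be dropped, averaging only over surviving trials understates what respondents intended.

```python
# Illustrative simulation of the adjusted-score bias in a BART-like task.
import random

def simulate(trials: int = 50000, n_max: int = 128, seed: int = 1):
    rng = random.Random(seed)
    intended_all, survivors = [], []
    for _ in range(trials):
        intended = rng.randint(1, n_max)     # pumps the respondent intends this trial
        explosion = rng.randint(1, n_max)    # balloon's (hidden) explosion point
        intended_all.append(intended)
        if intended < explosion:             # balloon survives; trial enters the score
            survivors.append(intended)
    mean_intended = sum(intended_all) / len(intended_all)
    adjusted = sum(survivors) / len(survivors)   # adjusted score: survivors only
    return mean_intended, adjusted

mean_intended, adjusted = simulate()
print(adjusted < mean_intended)              # True: the adjusted score is biased low
```

The automatic procedure avoids this selection problem entirely by recording the intended count before any balloon can explode.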
A two-stage within-subjects design was used to compare decisions based on numerically and verbally expressed probabilities. In Stage 1, subjects determined approximate equivalences between vague probability expressions, numerical probabilities, and graphical displays. Subsequently, in Stage 2 they bid for (Experiment 1) or rated (Experiment 2) gambles based on the previously equated verbal, numerical, and graphical descriptors. In Stage 1, numerical and verbal judgments were reliable, internally consistent, and monotonically related to the displayed probabilities. However, the numerical judgments were significantly superior in all respects because they were much less variable within and between subjects. In Stage 2, response times, bids, and ratings were inconsistent with both of two opposing sets of predictions, one assuming that imprecise gambles will be avoided and the other that verbal probabilities will be preferred. The entire pattern of results is explained by means of a general model of decision making with vague probabilities, which assumes that in the present task, when presented with a vague probability word, people focus on an implied probability interval and sample values within it to resolve the vagueness prior to forming a bid or a rating.

Subjective probability (SP) is a basic concept in all models of individual decision making under uncertainty. In one class of models, functions of SP are used to weight the utilities, or values, of the basic outcomes to yield a global assessment of goodness for each alternative. This class includes the traditional Subjectively Expected Utility model (Savage, 1954) and a large variety of more recent generalizations and refinements, such as the Subjectively Weighted Utility (Karmarkar, 1978), Certainty Equivalence (Handa, 1977), Prospect Theory (Kahneman & Tversky, 1979), and Anticipated Utility (Quiggin, 1982) models.
Obviously, such models require the SPs to take real numerical values bounded by 0 and 1 that satisfy certain consistency or coherence conditions. Thus, SP is considered to represent a mapping of an individual's subjective beliefs into the real numbers.

In another class of models, SP and outcome utilities are treated as separate dimensions, and alternatives are compared on a dimensional rather than a global basis (Payne, 1976; Russo & Dosher, 1983; Tversky, 1969). In these models, too, SP is treated as a mapping of subjective uncertainty onto the real numbers.

The numerous decision models that assume numerical representation of uncertainty stand in sharp contrast with the fact that people generally prefer to express their beliefs by means of natural language. Several reasons have been cited for the distinct preference for words over numbers (see also
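The interval-focus-and-sampling account summarized in the abstract above can be given a minimal computational sketch. The interval assigned to 'likely' and the averaging rule below are illustrative assumptions only, not the article's fitted model.

```python
# Illustrative sketch: a vague term implies a probability interval; values
# are sampled within it and combined before a bid is formed.
import random

def bid_for_gamble(term_interval, payoff, n_samples=10, seed=0):
    lo, hi = term_interval
    rng = random.Random(seed)
    samples = [rng.uniform(lo, hi) for _ in range(n_samples)]  # resolve the vagueness
    return sum(p * payoff for p in samples) / n_samples        # mean sampled expected value

bid = bid_for_gamble((0.6, 0.9), payoff=100)    # hypothetical interval for 'likely'
print(60 <= bid <= 90)                           # True: bid lies in the implied EV range
```

On this account the extra sampling step, rather than any aversion to imprecision per se, is what distinguishes decisions under verbal probabilities from decisions under numerical ones.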