In applied settings, such as aviation, medicine, and finance, individuals make decisions under various degrees of uncertainty, that is, when not all risks are known or can be calculated. In such situations, decisions can be made using fast-and-frugal heuristics. These are simple strategies that ignore part of the available information. In this article, we propose that the conceptual lens of fast-and-frugal heuristics is useful not only for describing but also for improving applied decision making. By exploiting features of the environment and capabilities of the decision makers, heuristics can be simple without trading off accuracy. Because decision aids based on heuristics build on how individuals make decisions, they can be adopted intuitively and used effectively. Beyond enabling accurate decisions, heuristics possess characteristics that facilitate their adaptation to varied settings. These characteristics include accessibility, speed, transparency, and cost effectiveness. Altogether, the article offers an overview of the literature on fast-and-frugal heuristics and their usefulness in diverse applied settings.
In Bayesian inference tasks, information about base rates as well as hit and false-alarm rates needs to be integrated according to Bayes’ rule once the result of a diagnostic test becomes known. Numerous studies have found that presenting the information in such a task in terms of natural frequencies leads to better performance than variants in which the information is presented in terms of probabilities or percentages. Natural frequencies are the tallies in a natural sample, in which hit and false-alarm rates are not normalized with respect to base rates. The present research replicates the beneficial effect of natural frequencies with four tasks from the domain of management, and with management students as well as experienced executives as participants. The percentage of Bayesian responses was almost twice as high when information was presented in natural frequencies as when it was presented in percentages. In contrast to most tasks previously studied, the majority of numerical responses were lower than the Bayesian solutions. Having heard of Bayes’ rule prior to the study did not affect Bayesian performance. An implication of our work is that textbooks explaining Bayes’ rule should teach how to represent information in terms of natural frequencies instead of how to plug probabilities or percentages into a formula.
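A minimal sketch of the two presentation formats, using hypothetical numbers (the abstract does not report the actual values used in the four management tasks): both routes yield the same posterior, but the natural-frequency version works with whole-number tallies from a single sample rather than normalized rates.

```python
def posterior_from_probabilities(base_rate, hit_rate, false_alarm_rate):
    """Bayes' rule applied to normalized probabilities."""
    true_pos = base_rate * hit_rate
    false_pos = (1 - base_rate) * false_alarm_rate
    return true_pos / (true_pos + false_pos)

def posterior_from_natural_frequencies(n, base_rate, hit_rate, false_alarm_rate):
    """The same computation as tallies in a natural sample of n cases;
    hit and false-alarm counts are not normalized with respect to base rates."""
    cases = round(n * base_rate)                        # e.g. 40 of 1,000 have the condition
    true_pos = round(cases * hit_rate)                  # e.g. 30 of those 40 test positive
    false_pos = round((n - cases) * false_alarm_rate)   # e.g. 96 of the 960 others test positive
    return true_pos / (true_pos + false_pos)            # 30 / (30 + 96)

# Hypothetical task: base rate 4%, hit rate 75%, false-alarm rate 10%.
p_prob = posterior_from_probabilities(0.04, 0.75, 0.10)
p_freq = posterior_from_natural_frequencies(1000, 0.04, 0.75, 0.10)
```

With these illustrative numbers, both functions return about 0.238, i.e. only roughly one in four positive test results indicates the condition, which may be why intuitive responses in probability format often overshoot.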
In research on Bayesian inferences, the specific tasks, with their narratives and characteristics, are typically seen as exchangeable vehicles that merely transport the structure of the problem to research participants. In the present paper, we explore whether, and possibly how, task characteristics that are usually ignored influence participants’ responses in these tasks. We focus both on quantitative dimensions of the tasks, such as their base rates, hit rates, and false-alarm rates, and on qualitative characteristics, such as whether the task involves a norm violation, whether the stakes are high or low, and whether the focus is on the individual case or on the numbers. Using a data set of 19 different tasks presented to 500 different participants who provided a total of 1,773 responses, we analyze these responses in two ways: first, on the level of the numerical estimates themselves, and second, on the level of various response strategies, Bayesian and non-Bayesian, that might have produced the estimates. We identified various contingencies, and most of the task characteristics influenced participants’ responses. Typically, this influence was stronger when the numerical information in the tasks was presented in terms of probabilities or percentages rather than natural frequencies; this effect cannot be fully explained by a higher proportion of Bayesian responses when natural frequencies were used. One characteristic that did not seem to influence participants’ response strategy was the numerical value of the Bayesian solution itself. Our exploratory study is a first step toward an ecological analysis of Bayesian inferences, and highlights new avenues for future research.
This initiative systematically examined the extent to which a large set of archival research findings generalizes across contexts. We repeated the key analyses for 29 original strategic management effects in the same context (direct reproduction) as well as in 52 novel time periods and geographies. Of the direct reproductions, 45% returned results matching the original reports, as did 55% of tests in different spans of years and 40% of tests in novel geographies. Some original findings were associated with multiple new tests. Reproducibility was the best predictor of generalizability: among the findings that proved directly reproducible, 84% emerged in other available time periods and 57% in other geographies. Overall, only limited empirical evidence of context sensitivity emerged. In a forecasting survey, independent scientists were able to anticipate which effects would find support in tests on new samples.
Whether people compete or cooperate with each other has consequences for their own performance and that of organizations. To explain why people compete or cooperate, previous research has focused on two main factors: situational outcome structures and personality types. Here, we propose that—above and beyond these two factors—situational cues, such as the format in which people receive feedback, strongly affect whether they act competitively, cooperatively, or individualistically. Results of a laboratory experiment support our theorizing: After receiving ranking feedback, both students and experienced managers treated group situations with cooperative outcome structures as competitive and were in consequence willing to forgo guaranteed financial gains to pursue a—financially irrelevant—better rank. Conversely, in dilemma situations, feedback based on the joint group outcome led to more cooperation than ranking feedback. Our study contributes to research on competition, cooperation, interdependence theory, forced ranking, and the design of information environments.