Background: Prognostic accuracy in palliative care is valued by patients, carers, and healthcare professionals. Previous reviews suggest that clinicians' survival estimates are inaccurate, but have only reported the accuracy of estimates for patients with a cancer diagnosis.

Objectives: To examine the accuracy of clinicians' estimates of survival and to determine whether any clinical profession is better at making them than another.

Data Sources: MEDLINE, Embase, CINAHL, and the Cochrane Database of Systematic Reviews and Trials, each searched from inception to June 2015. Reference lists of eligible articles were also checked.

Eligibility Criteria: Inclusion criteria: patients over 18, palliative population and setting, a quantifiable estimate based on real patients, full publication written in English. Exclusion criteria: estimates made following an intervention, such as surgery, or for patients who were artificially ventilated or in intensive care.

Study Appraisal and Synthesis Methods: Quality was assessed with the QUIPS tool. Data on the reported accuracy of estimates and information about the clinicians were extracted. Studies were grouped by type of estimate: categorical (the clinician chose from a predetermined list of outcomes), continuous (open-ended estimate), or probabilistic (likelihood of surviving a particular time frame).

Results: 4,642 records were identified; 42 studies fully met the review criteria. Categorical estimates showed wide variation in accuracy (range 23% to 78%), and continuous estimates ranged from an underestimate of 86 days to an overestimate of 93 days. The four papers using probabilistic estimates tended to show greater accuracy (c-statistics of 0.74–0.78). Information about the clinicians providing the estimates was limited, and no clear "expert" subgroup of clinicians was identified.

Limitations: High heterogeneity limited the analyses possible and prevented an overall accuracy from being reported. Data were extracted by a single reviewer using a standardised tool, which could have introduced bias. Devising search terms for prognostic studies is challenging; every attempt was made to make the search sufficiently sensitive to detect all prognostic studies, but it remains possible that some were not identified.

Conclusion: Studies of prognostic accuracy in palliative care are heterogeneous, but the evidence suggests that clinicians' predictions are frequently inaccurate. No subgroup of clinicians was consistently shown to be more accurate than any other.

Implications of Key Findings: Further research is needed to understand how clinical predictions are formulated and how their accuracy can be improved.
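The c-statistics reported for probabilistic estimates can be unpacked with a small sketch. For a binary outcome (survived the time frame or not), the c-statistic is the probability that a randomly chosen survivor received a higher predicted survival probability than a randomly chosen non-survivor, which is equivalent to the ROC AUC. The patient data below are hypothetical, purely for illustration:

```python
def c_statistic(predictions, outcomes):
    """Concordance (c) statistic for binary outcomes.

    Fraction of comparable pairs (one survivor, one non-survivor) in
    which the survivor received the higher predicted probability;
    ties count as half. Equivalent to ROC AUC for binary outcomes.
    """
    survived = [p for p, o in zip(predictions, outcomes) if o == 1]
    died = [p for p, o in zip(predictions, outcomes) if o == 0]
    if not survived or not died:
        raise ValueError("need at least one patient in each outcome group")
    concordant = 0.0
    for s in survived:
        for d in died:
            if s > d:
                concordant += 1.0
            elif s == d:
                concordant += 0.5
    return concordant / (len(survived) * len(died))

# Hypothetical clinician estimates of surviving 30 days, and actual outcomes
preds = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2]
lived = [1,   1,   0,   1,   0,   1,   0]
print(round(c_statistic(preds, lived), 3))  # → 0.75
```

A value of 0.5 corresponds to chance discrimination and 1.0 to perfect discrimination, which situates the reported 0.74–0.78 as moderately good.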
A robust finding in social psychology is that people judge negative events as less likely to happen to themselves than to the average person, a behavior interpreted as showing that people are "unrealistically optimistic" in their judgments of risk concerning future life events. However, we demonstrate how unbiased responses can result in data patterns commonly interpreted as indicative of optimism for purely statistical reasons. Specifically, we show how extant data from unrealistic optimism studies investigating people's comparative risk judgments are plagued by the statistical consequences of sampling constraints and the response scales used, in combination with the comparative rarity of truly negative events. We conclude that the presence of such statistical artifacts raises questions over the very existence of an optimistic bias about risk and implies that to the extent that such a bias exists, we know considerably less about its magnitude, mechanisms, and moderators than previously assumed.
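One statistical consequence of event rarity described above can be illustrated with a minimal simulation (the risk distribution below is hypothetical, chosen only to be right-skewed as rare events imply): when most people have low risk and a few have high risk, a majority of the population is genuinely below the mean risk, so truthful comparative judgments reproduce the group-level pattern usually scored as optimism.

```python
import random

random.seed(1)

# Hypothetical right-skewed distribution of true individual risks for a
# rare negative event: most people have low risk, a few have high risk.
risks = [random.betavariate(1, 9) for _ in range(10_000)]  # mean ~0.10
mean_risk = sum(risks) / len(risks)

# Each person truthfully reports whether their own risk is below average.
below_average = sum(r < mean_risk for r in risks) / len(risks)

print(f"mean risk: {mean_risk:.2f}")
print(f"proportion truthfully 'below average': {below_average:.2f}")
# With a right-skewed risk distribution, well over half the population is
# genuinely below the mean -- a pattern a comparative-judgment study
# would score as group-level "unrealistic optimism" despite zero bias.
```

For a Beta(1, 9) risk distribution the expected proportion below the mean is about 61%, with no optimism built into any individual's response.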
We present a theoretical account of the origin of the shapes of utility, probability weighting, and temporal discounting functions. In an experimental test of the theory, we systematically change the shape of revealed utility, weighting, and discounting functions by manipulating the distribution of monies, probabilities, and delays in the choices used to elicit them. The data demonstrate that there is no stable mapping between attribute values and their subjective equivalents. Expected and discounted utility theories, and also their descendants such as prospect theory and hyperbolic discounting theory, simply assert stable mappings to describe choice data and offer no account of the instability we find. We explain where the shape of the mapping comes from and, in describing the mechanism by which people choose, explain why the shape depends on the distribution of gains, losses, risks, and delays in the environment. Data, as supplemental material, are available at http://dx.doi.org/10.1287/mnsc.2013.1853. This paper was accepted by Yuval Rottenstreich, judgment and decision making.
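The dependence of a revealed function's shape on the environment's distribution can be sketched with a simple rank-based mechanism of the kind this account appeals to (the sample sets below are hypothetical): if the subjective value of an amount is its relative rank within a comparison sample, a skewed environment of amounts yields a concave, utility-like mapping, while a uniform environment yields a nearly linear one.

```python
def relative_rank(x, sample):
    """Proportion of comparison values strictly below x -- a rank-based
    stand-in for the subjective value of x in that environment."""
    return sum(s < x for s in sample) / len(sample)

# Two hypothetical environments of monetary amounts
skewed  = [1, 2, 3, 5, 8, 12, 20, 35, 60, 100]    # small amounts common
uniform = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]

probes = [10, 50, 90]
print([relative_rank(p, skewed) for p in probes])   # → [0.5, 0.8, 0.9]
print([relative_rank(p, uniform) for p in probes])  # → [0.0, 0.4, 0.8]
```

In the skewed environment the mapping rises steeply for small amounts and flattens for large ones (diminishing sensitivity); in the uniform environment it is close to linear, so the same probe amounts receive different subjective equivalents purely because the comparison distribution changed.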
Received academic wisdom holds that human judgment is characterized by unrealistic optimism, the tendency to underestimate the likelihood of negative events and overestimate the likelihood of positive events. With recent questions being raised over the degree to which the majority of this research genuinely demonstrates optimism, attention to possible mechanisms generating such a bias becomes ever more important. New studies have now claimed that unrealistic optimism emerges as a result of biased belief updating with distinctive neural correlates in the brain. On a behavioral level, these studies suggest that, for negative events, desirable information is incorporated into personal risk estimates to a greater degree than undesirable information (resulting in a more optimistic outlook). However, using task analyses, simulations, and experiments we demonstrate that this pattern of results is a statistical artifact. In contrast with previous work, we examined participants' use of new information with reference to the normative, Bayesian standard. Simulations reveal the fundamental difficulties that would need to be overcome by any robust test of optimistic updating. No such test presently exists, so that the best one can presently do is perform analyses with a number of techniques, all of which have important weaknesses. Applying these analyses to five experiments shows no evidence of optimistic updating. These results clarify the difficulties involved in studying human 'bias' and cast additional doubt over the status of optimism as a fundamental characteristic of healthy cognition.
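The normative Bayesian standard invoked above can be made concrete with a minimal sketch (the trial values are hypothetical): a participant's second risk estimate is scored against the posterior that Bayes' rule prescribes, given their prior and the diagnosticity of the new information.

```python
def bayes_posterior(prior, likelihood_ratio):
    """Normative posterior probability via the odds form of Bayes' rule:
    posterior odds = prior odds * likelihood ratio."""
    if not 0 < prior < 1:
        raise ValueError("prior must be strictly between 0 and 1")
    posterior_odds = (prior / (1 - prior)) * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Hypothetical trial: a participant believes their risk of an event is
# 30%, then receives information twice as likely if the event will
# occur than if it will not (likelihood ratio = 2).
prior = 0.30
normative = bayes_posterior(prior, 2.0)  # ~0.462
reported = 0.35                          # hypothetical second estimate

# The normatively required update, against which a reported update
# would be scored as over- or under-adjustment:
print(f"normative update: {normative - prior:+.3f}")
print(f"reported update:  {reported - prior:+.3f}")
```

Comparing updates to this standard, rather than simply comparing updates after desirable versus undesirable information, is what distinguishes a genuine test of optimistic updating from the artifactual pattern the abstract describes.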
Verbal probability expressions are frequently used to communicate risk and uncertainty. The Intergovernmental Panel on Climate Change (IPCC), for example, uses them to convey risks associated with climate change. Given the potential for human action to mitigate future environmental risks, it is important to understand how people respond to these expressions. In 3 studies employing a novel manipulation of event severity (so as to avoid any confound with event base rate), we demonstrated a systematic effect of event severity on the interpretation of verbal probability expressions. Challenging a previous finding in the literature, expressions referring to a severe event were interpreted as indicating a higher probability than those referring to a more neutral event. The finding was demonstrated in scenarios communicating risks relating to climate change (Studies 1 and 2) and replicated in scenarios involving nanotechnology and nuclear materials (Study 3). This is the first direct demonstration of an effect of outcome severity on the interpretation of verbal probability expressions, correcting a previous (potentially problematic) conclusion attributable to a flawed experimental design.
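The IPCC pairs its verbal expressions with calibrated numeric ranges, and severity-inflected interpretations of the kind reported above can be read as departures from those ranges. A small lookup, transcribed from the AR5 uncertainty guidance (the bounds below should be checked against the guidance note itself; the helper function is purely illustrative), sketches the intended mapping:

```python
# Calibrated likelihood language from the IPCC AR5 uncertainty guidance,
# expressed as (lower, upper) probability bounds.
IPCC_LIKELIHOOD = {
    "virtually certain":      (0.99, 1.00),
    "extremely likely":       (0.95, 1.00),
    "very likely":            (0.90, 1.00),
    "likely":                 (0.66, 1.00),
    "about as likely as not": (0.33, 0.66),
    "unlikely":               (0.00, 0.33),
    "very unlikely":          (0.00, 0.10),
    "extremely unlikely":     (0.00, 0.05),
    "exceptionally unlikely": (0.00, 0.01),
}

def consistent(term, interpreted_probability):
    """True if a reader's numeric interpretation falls inside the
    calibrated range for the verbal term."""
    low, high = IPCC_LIKELIHOOD[term]
    return low <= interpreted_probability <= high

# A severity-inflated reading of "unlikely" as 40% falls outside the range:
print(consistent("unlikely", 0.40))  # → False
print(consistent("likely", 0.75))    # → True
```

If outcome severity shifts interpretations upward, as the studies above found, readers of risk communications about severe events will systematically assign higher probabilities than the calibrated scale intends.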