This work addresses the question of what makes a query difficult. It proposes a novel model that captures the main components of a topic and the relationship between those components and topic difficulty. The three components of a topic are the textual expression describing the information need (the query or queries), the set of documents relevant to the topic (the Qrels), and the entire collection of documents. We show experimentally that topic difficulty strongly depends on the distances between these components. When one of the model components is unknown, the model remains useful: the missing component can be approximated from the others. We demonstrate the applicability of the difficulty model for several uses, such as predicting query difficulty, predicting the number of topic aspects expected to be covered by the search results, and analyzing the findability of a specific domain.
In this article, we address the apparent discrepancy between causal Bayes net theories of cognition, which posit that judgments of uncertainty are generated from causal beliefs in a way that respects the norms of probability, and evidence that probability judgments based on causal beliefs are systematically in error. One purported source of bias is the ease of reasoning forward from cause to effect (predictive reasoning) versus backward from effect to cause (diagnostic reasoning). Using causal Bayes nets, we developed a normative formulation of how predictive and diagnostic probability judgments should vary with the strength of alternative causes, causal power, and prior probability. This model was tested through two experiments that elicited predictive and diagnostic judgments as well as judgments of the causal parameters for a variety of scenarios that were designed to differ in strength of alternatives. Model predictions fit the diagnostic judgments closely, but predictive judgments displayed systematic neglect of alternative causes, yielding a relatively poor fit. Three additional experiments provided more evidence of the neglect of alternative causes in predictive reasoning and ruled out pragmatic explanations. We conclude that people use causal structure to generate probability judgments in a sophisticated but not entirely veridical way.
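The normative formulation described above can be sketched with a noisy-OR parameterization, a common choice for causal Bayes nets; the article's exact model, and the function names `predictive` and `diagnostic`, are assumptions here:

```python
def predictive(power, alt, prior):
    # P(effect | cause): under noisy-OR, the effect occurs if the cause
    # produces it (causal power) or an alternative cause does (alt).
    return power + alt - power * alt

def diagnostic(power, alt, prior):
    # P(cause | effect) via Bayes' rule, where alt = P(effect | no cause).
    pe_given_c = predictive(power, alt, prior)
    p_effect = pe_given_c * prior + alt * (1 - prior)
    return pe_given_c * prior / p_effect
```

Note that the normative predictive judgment rises with the strength of alternatives; neglecting alternatives (treating `alt` as 0) collapses it to the causal power alone, which matches the bias the experiments report.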
People are renowned for their failure to consider alternative hypotheses. We compare neglect of alternative causes when people make predictive versus diagnostic probability judgments. One study with medical professionals reasoning about psychopathology and two with undergraduates reasoning about goals and actions or about causal transmission yielded the same results: neglect of alternative causes when reasoning from cause to effect but not when reasoning from effect to cause. The findings suggest that framing a problem as a diagnostic-likelihood judgment can reduce bias.
Distinguishing cognitive systems that support intuition and deliberation has proven necessary to explain how people reason, decide, categorize, form attitudes, make confidence and moral judgments, and prioritize goals. Both behavioral and neuroimaging evidence support similar distinctions in each field. Deliberative processing enlists working memory, whereas intuitive processing depends more directly on long-term memory retrieval. One key unanswered question concerns how the systems interact. The data suggest that one of the key functions of deliberation is to suppress intuition; it does not invariably succeed, however, and leakage is common. Another question concerns the relations between affect and the reasoning systems. The evidence suggests that emotions are not exclusively tied to the intuitive system. Instead, emotional reactions directly tied to the perception of objects and events (e.g., fear) are associated with intuition, emotions that arise when alternative possibilities are considered (e.g., regret) are tied to deliberation, and moods (e.g., happy, sad) influence how much each system is relied on. Copyright © 2010 John Wiley & Sons, Ltd.