Resurgent interest in both mechanistic and counterfactual theories of explanation has led to a fair amount of discussion regarding the relative merits of the two approaches. James Woodward is currently the pre-eminent counterfactual theorist, and he criticizes the mechanists on the following grounds: unless mechanists about explanation invoke counterfactuals, they cannot make sense of claims about causal interactions between mechanism parts, nor of causal explanations put forward absent knowledge of productive mechanisms. He claims that these shortfalls can be offset if mechanists simply borrow key tenets of his counterfactual theory of causal claims. What mechanists must bear in mind, however, is that by pursuing this course they risk both the assimilation of mechanistic theories of explanation into Woodward's own favored counterfactual theory and the marginalization of mechanistic explanations to a proper subset of all explanations. An outcome more favorable to mechanists might be had by pursuing an actualist-mechanist theory of the contents of causal claims. While it may not seem obvious at first blush that such an approach is workable, even in principle, recent empirical research into causal perception, causal belief, and mechanical reasoning provides some grounds for optimism.
Theories concerning the structure, or format, of mental representation should (1) be formulated in mechanistic, rather than metaphorical, terms; (2) do justice to several philosophical intuitions about mental representation; and (3) explain the human capacity to predict the consequences of worldly alterations (i.e., to think before we act). The hypothesis that thinking involves the application of syntax-sensitive inference rules to syntactically structured mental representations has been said to satisfy all three conditions. An alternative hypothesis is that thinking requires the construction and manipulation of the cognitive equivalent of scale models. A reading of this hypothesis is provided that satisfies condition (1) and that, even though it may not fully satisfy condition (2), turns out (in light of the frame problem) to be the only known way to satisfy condition (3).
In this groundbreaking book, Jonathan Waskan challenges cognitive science's dominant model of mental representation and proposes a novel, well-devised alternative. The traditional view in the cognitive sciences uses a linguistic (propositional) model of mental representation. This logic-based model of cognition informs and constrains both the classical tradition of artificial intelligence and modeling in the connectionist tradition. It falls short, however, when confronted by the frame problem: the lack of a principled way to determine which features of a representation must be updated when new information becomes available. Proposed alternatives, including the imagistic model, have so far failed to resolve this problem. Waskan proposes instead the Intrinsic Cognitive Models (ICM) hypothesis, which holds that representational states can be conceptualized as the cognitive equivalent of scale models. He argues further that the proposal that humans harbor and manipulate these cognitive counterparts to scale models offers the only viable explanation for what most clearly differentiates humans from other creatures: their capacity to engage in truth-preserving manipulation of representations. (Bradford Books imprint)
Here I consider the relative merits of two recent models of explanation: James Woodward's interventionist-counterfactual model and the model model. According to the former, explanations are largely constituted by information about the consequences of counterfactual interventions. Problems arise for this approach because countless relevant interventions are possible in most cases and because it overlooks other kinds of equally relevant information. According to the model model, explanations are largely constituted by cognitive models of actual mechanisms. On this approach, explanations tend not to represent any of the aforementioned information explicitly but can instead be used to produce it on demand. The model model thus offers the more plausible account both of the information of which we are aware when we have an explanation and of the ratiocinative process through which we derive many kinds of information relevant to the evaluation of explanations.