Certain “generic” generalizations concern functions and purposes, e.g., cars are for driving. Some functional properties yield unacceptable teleological generics: for instance, cars are for parking seems false even though people park cars as often as they drive them. No theory of teleology in philosophy or psychology explains what makes teleological generics acceptable. However, a recent theory (Prasada, 2017; Prasada & Dillingham, 2006; Prasada et al., 2013) argues that a certain type of mental representation – a “principled” connection between a kind and a property – licenses generic generalizations. The account predicts that people should accept teleological generics that describe kinds and properties linked by a principled connection. On this analysis, car bears a principled connection to driving (a car’s primary purpose) but only a non-principled connection to parking (an incidental consequence of driving). We report four experiments that tested and corroborated the theory’s predictions, and we describe a regression analysis that rules out alternative accounts. We conclude by showing how the theory can serve as the foundation for a general theory of teleological thinking.
People more frequently select norm-violating factors, relative to norm-conforming ones, as the cause of some outcome. Until recently, this abnormal-selection effect had been studied using only retrospective, vignette-based paradigms. In within-participants designs, we use a novel set of videos to investigate this effect for prospective causal judgments—i.e., judgments about the cause of some future outcome. Three experiments show that people more frequently select norm-violating factors, relative to norm-conforming ones, as the cause of some future outcome. We discuss these results in relation to recent efforts to model causal judgment.
No present theory explains the inferences people draw about the real world when reasoning about “bouletic” relations, i.e., predicates that express desires, such as 'want' in Lee wants to be in love. Linguistic accounts of 'want' define it in terms of a relation to the desirer’s beliefs and the desirability of its complement. In contrast, we describe a new model-based theory that posits that, by default, desire predicates such as 'want' contrast desires against facts. In particular, 'A wants P' implies by default that P is not the case, because you cannot want what is already true. On further deliberation, reasoners may infer that A believes, but does not know for certain, that P is not the case. The theory makes several empirical predictions about how people interpret, assess the consistency of, and draw conclusions from desire predicates like 'want'. Seven experiments tested and validated the theory’s central predictions. We assess the theory in light of recent analyses of desire predicates.
In three experiments (n = 208), participants verified disjunctions based on ‘or’. In Experiment 1, what could have happened instead of the facts biased participants’ judgments about which of two disjunctions was correct. In Experiment 2, participants used pictures of journeys to verify disjunctions such as: “You arrived at Exeter or Perth”. Given that you arrived at one of these destinations, if the other destination was once equally possible, participants tended to verify the disjunction as: true and it couldn’t have been false; whereas if the other destination was impossible, they tended to verify it as: true but it could have been false. Given that you failed to arrive at one of the destinations, the status of the other destination also yielded predictable verifications. In Experiment 3, analogous results occurred for judgments of true, false, and possibly true and possibly false. Participants therefore must have simulated counterfactual alternatives in order to verify disjunctions.
Pose the following problem to a smart eight-year-old: "All machines can break down. Alexa is a machine. What follows?" and the child is likely to reply: "Alexa can break down." So, as experiments confirm, human beings unschooled in logic are able to make deductions. Yet this easy deduction defeats Alexa, Siri, and other virtual assistants. To build machines that reason, students of reasoning need to know the answers to three questions: (1) Which deductions do human reasoners make? (2) How do they make them? And (3) how can computers simulate them? The goal of this chapter is to describe the main efforts to simulate human deduction. It aims to provide its own intellectual life-support system so that readers can understand it without having to consult anything else.