This paper proposes a formal framework for modeling the interaction of causal and (qualitative) epistemic reasoning. To this end, we extend the notion of a causal model [16,17,26,11] with a representation of the epistemic state of an agent. On the side of the object language, we add operators to express knowledge and the act of observing new information. We provide a sound and complete axiomatization of the logic, and discuss the relation of this framework to causal team semantics.
This paper takes a first step towards a logic of learning from experiments. For this, we investigate formal frameworks for modeling the interaction of causal and (qualitative) epistemic reasoning. Crucial for our approach is the idea that the notion of an intervention can serve as a formal expression of a (real or hypothetical) experiment (Pearl, 2009, Causality: Models, Reasoning, and Inference, 2nd edn. Cambridge University Press, Cambridge; Woodward, 2003, Making Things Happen, vol. 114 of Oxford Studies in the Philosophy of Science. Oxford University Press). In a first step, we extend a causal model (Briggs, 2012, Philosophical Studies, 160, 139–166; Galles and Pearl, 1998, An axiomatic characterisation of causal counterfactuals. Foundations of Science, 3, 151–182; Halpern, 2000, Axiomatizing causal reasoning. Journal of Artificial Intelligence Research, 12, 317–337; Pearl, 2009, Causality: Models, Reasoning, and Inference, 2nd edn. Cambridge University Press, Cambridge) with a simple Hintikka-style representation of the epistemic state of an agent. In the resulting setting, one can talk about the knowledge of an agent and information update. The resulting logic can model reasoning about thought experiments. However, it cannot account for learning from experiments, as is clearly brought out by the fact that it validates the principle of no learning for interventions. Therefore, in a second step, we implement a more complex notion of knowledge (Nozick, 1981, Philosophical Explanations. Harvard University Press, Cambridge, Massachusetts) that allows an agent to observe (measure) certain variables when an experiment is carried out. This extended system does allow for learning from experiments. For all the proposed logics, we provide a sound and complete axiomatization.
The paper focuses on a recent challenge raised against the interventionist approach to the meaning of counterfactual conditionals. According to this objection, interventionism cannot in general account for the interpretation of right-nested counterfactuals, the source of the problem being its strict notion of intervention. We report on the results of an empirical study supporting the objection, and we extend the well-known logic of actual causality with a new operator expressing an alternative notion of intervention that does not suffer from the problem (and thus can account for some of the critical examples). The core idea of the alternative approach is a new notion of intervention that operates on the evaluation of the variables in a causal model rather than on their functional dependencies. Our results provide new insights into the logical analysis of causal reasoning.
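The contrast drawn in this abstract can be illustrated computationally. The sketch below is purely illustrative and not taken from the paper: it models a structural causal model as a dictionary of equations (assumed acyclic and given in dependency order), implements the standard Pearl-style intervention that replaces a variable's structural equation, and then shows one crude reading of an "evaluation-level" intervention that overwrites a variable's value while leaving the functional dependencies untouched. All names (`solve`, `do`, the toy model) are assumptions for the example.

```python
# Illustrative sketch, not the paper's formalism.

def solve(equations, exogenous):
    """Compute all variable values from structural equations,
    assumed acyclic and listed in dependency order."""
    vals = dict(exogenous)
    for var, f in equations.items():
        vals[var] = f(vals)
    return vals

def do(equations, var, value):
    """Pearl-style intervention do(var := value): replace the
    structural equation of `var` with a constant."""
    new_eqs = dict(equations)
    new_eqs[var] = lambda v, value=value: value
    return new_eqs

# Toy model: U exogenous; X := U; Y := X.
equations = {"X": lambda v: v["U"], "Y": lambda v: v["X"]}

baseline = solve(equations, {"U": 0})              # X = 0, Y = 0
pearl = solve(do(equations, "X", 1), {"U": 0})     # X = 1 and, downstream, Y = 1

# Evaluation-level "intervention": overwrite X in the valuation only,
# so Y keeps its pre-intervention value.
valuation = dict(baseline)
valuation["X"] = 1                                 # X = 1, but Y stays 0
```

The point of the contrast: the Pearl-style intervention propagates through the functional dependencies (`Y` changes with `X`), whereas the evaluation-level change does not, which is one way to see why the two notions can diverge on nested counterfactuals.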