2019
DOI: 10.1609/aaai.v33i01.33012678

Abstracting Causal Models

Abstract: We consider a sequence of successively more restrictive definitions of abstraction for causal models, starting with a notion introduced by Rubenstein et al. (2017) called exact transformation that applies to probabilistic causal models, moving to a notion of uniform transformation that applies to deterministic causal models and does not allow differences to be hidden by the "right" choice of distribution, and then to abstraction, where the interventions of interest are determined by the map from low-level sta…
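
The commutativity idea behind these definitions can be made concrete with a toy check: map low-level states to high-level states with a function tau, map low-level interventions to high-level interventions with a function omega, and verify that intervening then abstracting agrees with abstracting then intervening. The Python sketch below is an illustration only, not the paper's formal construction (which covers probabilistic models and restricted intervention sets); the two models and the names solve_low, solve_high, tau, and omega are all hypothetical.

    # Minimal sketch of a (tau, omega)-style abstraction check between two
    # deterministic causal models, in the spirit of the definitions above.
    from itertools import product

    def solve_low(s1, s2):
        """Low-level structural equations: outcome O is S1 OR S2."""
        o = max(s1, s2)
        return (s1, s2, o)

    def solve_high(t):
        """High-level structural equations: outcome O just copies T."""
        return (t, t)

    def tau(state):
        """Map a low-level state to the corresponding high-level state."""
        s1, s2, o = state
        return (max(s1, s2), o)

    def omega(s1, s2):
        """Map a low-level intervention to the corresponding high-level one."""
        return max(s1, s2)

    # Abstraction requires the square to commute: intervening at the low
    # level and then abstracting with tau must match abstracting the
    # intervention with omega and solving the high-level model.
    assert all(
        tau(solve_low(s1, s2)) == solve_high(omega(s1, s2))
        for s1, s2 in product((0, 1), repeat=2)
    )
    print("tau/omega commute on all low-level interventions")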

Cited by 48 publications (76 citation statements)
References 5 publications

“…We mentioned the example of a pandemic leading to a national disaster because the administration refrained from taking appropriate precautionary measures. Our intuition is that people can still make causal judgments in complex situations like these because they have the ability to construct mental models that abstract away many of the low-level details of the situation (see Beckers & Halpern, 2019; Ullman et al., 2017). So instead of mentally simulating what actions each person would have taken, one would need to abstract away from this low level, and then consider the counterfactual dependence between the variables of interest on a higher level of abstraction.…”
Section: Omissions Beyond the Physical
confidence: 99%
“…parts of the causal graph or some of the structural equations might suffice in many contexts to prove that a change in classification is unjustified. Moreover, for conceptually less-structured feature spaces, higher-order causal models (Beckers and Halpern 2019), where features such as objects are supervened by lower-order features such as pixels, may provide the right level of description to define misclassification.…”
Section: Limitations and Open Problems
confidence: 99%
“…If Hoel has provided only a formal approach to measuring the sense in which a higher-level description of a system can be more informative, he will still have made a valuable contribution (cf. Beckers & Halpern, 2019, for another approach to this kind of question). By measuring the strength of an intervention in terms of effective information, he gives a helpful analysis of the formal circumstances under which a coarse-grained description of a system might be more (epistemically) beneficial than a fine-grained one, and consequently an analysis of the kinds of systems where an epistemically non-reductive 'special science' approach is most appropriate.…”
Section: Is It Really Emergent?
confidence: 99%
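
The coarse-graining point in this last excerpt can be illustrated with Hoel's effective information (EI): the mutual information between a maximum-entropy distribution over interventions on a system's states and the resulting next-state distribution. Below is a hedged Python sketch under that definition; the transition matrices are a standard toy example of a noisy micro model whose deterministic coarse-graining scores higher EI, and the function name effective_information is hypothetical, not taken from either paper.

    # Hedged illustration of Hoel-style effective information (EI) for a
    # row-stochastic transition matrix under a uniform (max-entropy)
    # intervention distribution over states.
    import numpy as np

    def effective_information(tpm):
        """EI in bits: mean KL divergence of each row from the average effect."""
        tpm = np.asarray(tpm, dtype=float)
        effect = tpm.mean(axis=0)  # effect distribution under do(uniform)
        with np.errstate(divide="ignore", invalid="ignore"):
            ratio = np.where(tpm > 0, tpm / effect, 1.0)
            kl = np.sum(np.where(tpm > 0, tpm * np.log2(ratio), 0.0), axis=1)
        return kl.mean()

    # Micro model: states 0-2 transition uniformly among themselves;
    # state 3 maps to itself.
    micro = [[1/3, 1/3, 1/3, 0],
             [1/3, 1/3, 1/3, 0],
             [1/3, 1/3, 1/3, 0],
             [0,   0,   0,   1]]

    # Macro model after grouping {0,1,2} -> A and {3} -> B: deterministic.
    macro = [[1, 0],
             [0, 1]]

    print(effective_information(micro))  # ~0.81 bits
    print(effective_information(macro))  # 1.0 bits: the coarser model wins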