A glaring asymmetry, obvious at this meeting, is that historians dress better than philosophers: historians always being interested in the details, sartorial and otherwise, while philosophers seem concerned only with dressing in general. (Richards 1992)

Abstract: We respond to two kinds of skepticism about integrated history and philosophy of science: foundational and methodological. Foundational skeptics doubt that the history and the philosophy of science have much to gain from each other in principle. We therefore discuss some of the unique rewards of work at the intersection of the two disciplines. By contrast, methodological skeptics already believe that the two disciplines should be related to each other, but they doubt that this can be done successfully. Their worries are captured by the so-called dilemma of case studies: On one horn of the dilemma, we begin our integrative enterprise with philosophy and proceed from there to history, in which case we may well be selecting our historical cases so as to fit our preconceived philosophical theses. On the other horn, we begin with history and proceed to philosophical reflection, in which case we are prone to unwarranted generalization from particulars. Against worries about selection bias, we argue that we routinely need to make explicit the criteria for choosing particular historical cases to investigate particular philosophical theses. It then becomes possible to ask whether the selection criteria were biased. Against worries about unwarranted generalization, we stress the iterative nature of the process by which historical data and philosophical concepts are brought into alignment. The skeptics' doubts are fueled by an outdated model of outright confirmation vs. outright falsification of philosophical concepts. A more appropriate model is one of stepwise and piecemeal improvement.
This paper argues that a notion of statistical explanation, based on Salmon’s statistical relevance model, can help us better understand deep neural networks. It is proved that homogeneous partitions, the core notion of Salmon’s model, are equivalent to minimal sufficient statistics, an important notion from statistical inference. This establishes a link to deep neural networks via the so-called Information Bottleneck method, an information-theoretic framework, according to which deep neural networks implicitly solve an optimization problem that generalizes minimal sufficient statistics. The resulting notion of statistical explanation is general, mathematical, and subcausal.
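For orientation, the optimization problem mentioned in this abstract can be written down compactly. The following is a standard statement of information-theoretic sufficiency and of the Information Bottleneck objective; the notation is ours, not the paper's, and is offered only as a reminder of the usual formulation.

```latex
% Information-theoretic sufficiency (standard definitions; notation ours):
%   a statistic T = f(X) is sufficient for Y   iff   I(T;Y) = I(X;Y),
%   and minimal sufficient iff, among all sufficient statistics, it minimizes I(X;T).
%
% The Information Bottleneck relaxes the hard sufficiency constraint into a
% trade-off between compression I(X;T) and prediction I(T;Y), governed by \beta:
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y), \qquad \beta > 0 .
```

In the limit of large β the optimum approaches a minimal sufficient statistic, which is the sense in which the Information Bottleneck is usually said to generalize minimal sufficiency; if the abstract's equivalence result holds, Salmon-style homogeneous partitions would sit at that limiting end of the trade-off.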
Machine learning methods have recently raised high expectations in climate modelling as a means of addressing climate change, but they are often regarded as non-physics-based 'black boxes' that may not provide any understanding. Yet in many ways understanding seems indispensable for appropriately evaluating climate models and for building confidence in climate projections. Relying on two case studies, we compare how machine learning and standard statistical techniques affect our ability to understand the climate system. For that purpose, we put five evaluative criteria of understanding to work: intelligibility, representational accuracy, empirical accuracy, coherence with background knowledge, and assessment of the domain of validity. We argue that the two families of methods are part of the same continuum, along which these criteria of understanding come in degrees, and that machine learning methods therefore do not necessarily constitute a radical departure from standard statistical tools as far as understanding is concerned.

* We thank the participants of the philosophy of science research colloquium in the Spring semester 2020 at the University of Bern for valuable feedback on an earlier draft of the paper. We also wish to thank the participants of the seminar 'Philosophy of science perspectives on the climate challenge' and the workshop 'Big data, machine learning, climate modelling & understanding' in the Fall semester 2019 at the University of Bern, supported by the Oeschger Centre for Climate Change Research. JJ and VL are grateful to the Swiss National Science Foundation for financial support (grant PP00P1_170460). TR was funded by the cogito foundation.
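The continuum claim in the abstract above can be made vivid with a toy sketch. The code below is purely illustrative and is not one of the paper's case studies: it only shows that a "neural network" with no hidden layer and an identity activation, trained on squared error, coincides with ordinary least-squares regression; the data and parameter choices are hypothetical.

```python
# Illustrative sketch only (not from the paper): a minimal sense in which a
# neural network and a standard statistical tool lie on the same continuum.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                      # hypothetical predictors
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.normal(size=200)

# Standard statistical tool: ordinary least squares.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Neural network" with one linear output unit and no hidden layer,
# trained by gradient descent on the same squared-error loss.
w = np.zeros(3)
for _ in range(5000):
    grad = 2 * X.T @ (X @ w - y) / len(y)
    w -= 0.01 * grad

print(np.allclose(beta_ols, w, atol=1e-3))         # True: the two coincide
```

Adding hidden layers and nonlinear activations then moves gradually away from this shared baseline rather than jumping to a categorically different kind of tool, which is roughly the gradualist picture the authors defend with their evaluative criteria.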
Some machine learning models, in particular deep neural networks (DNNs), are not very well understood; nevertheless, they are frequently used in science. Does this lack of understanding pose a problem for using DNNs to understand empirical phenomena? Emily Sullivan has recently argued that understanding with DNNs is not limited by our lack of understanding of DNNs themselves. In the present paper, we will argue, contra Sullivan, that our current lack of understanding of DNNs does limit our ability to understand with DNNs. Sullivan’s claim hinges on which notion of understanding is at play. If we employ a weak notion of understanding, then her claim is tenable, but rather weak. If, however, we employ a strong notion of understanding, particularly explanatory understanding, then her claim is not tenable.