Probabilistic models have much to offer to epistemology and philosophy of science. Arguably, the coherence theory of justification claims that the more coherent a set of propositions is, the more confident one ought to be in its content, ceteris paribus. Although an impossibility result shows that no coherence ordering can exist, a coherence quasi-ordering can be constructed that respects this claim and is relevant to scientific-theory choice. Bayesian-network models of the reliability of information sources can be applied to Condorcet-style jury voting, Tversky and Kahneman’s Linda puzzle, the variety-of-evidence thesis, the Duhem–Quine thesis, and the informational value of testimony.
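The reliability models mentioned above can be illustrated with a minimal sketch of the standard Condorcet-style setup: each witness reports truthfully with probability r, falsely otherwise, independently given the truth of the hypothesis H. The prior, reliability value, and witness count below are hypothetical numbers chosen only for illustration, not taken from the work itself.

```python
# Posterior probability of a hypothesis H after n independent, partially
# reliable witnesses all report that H is true. Each witness reports
# truthfully with probability r and falsely with probability 1 - r,
# independently given H (illustrative Condorcet-style reliability model).

def posterior(prior: float, r: float, n: int) -> float:
    """Bayes' theorem for n unanimous positive reports."""
    like_h = r ** n            # P(all report H | H)
    like_not_h = (1 - r) ** n  # P(all report H | not-H)
    return prior * like_h / (prior * like_h + (1 - prior) * like_not_h)

# Two 80%-reliable witnesses push a 50% prior to about 0.94,
# while witnesses at chance level (r = 0.5) leave the prior unchanged.
print(posterior(0.5, 0.8, 2))  # ≈ 0.9412
print(posterior(0.5, 0.5, 2))  # 0.5
```

Unanimous testimony from even modestly reliable independent sources compounds quickly, which is the probabilistic core of the jury-voting and testimony results.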
Toy models are highly idealized and extremely simple models. Although they are omnipresent across scientific disciplines, toy models are a surprisingly under-appreciated subject in the philosophy of science. The main philosophical puzzle regarding toy models is that it remains unsettled what the epistemic goal of toy modeling is. One promising proposal for answering this question is the claim that the epistemic goal of toy models is to provide individual scientists with understanding. The aim of this paper is to articulate and defend this claim precisely. In particular, we will distinguish between autonomous and embedded toy models, and then argue that important examples of autonomous toy models are sometimes best interpreted as providing how-possibly understanding, while embedded toy models yield how-actually understanding, provided certain conditions are satisfied.
We reconsider the Nagelian theory of reduction and argue that, contrary to a widely held view, it is the right analysis of intertheoretic reduction. The alleged difficulties of the theory either vanish upon closer inspection or turn out to be substantive philosophical questions rather than knock-down arguments.
Effective field theories have been a very popular tool in quantum physics for almost two decades. And there are good reasons for this. I will argue that effective field theories share many of the advantages of both fundamental theories and phenomenological models, while avoiding their respective shortcomings. They are, for example, flexible enough to cover a wide range of phenomena, and concrete enough to provide a detailed story of the specific mechanisms at work at a given energy scale. So will all of physics eventually converge on effective field theories? This paper argues that good scientific research can be characterised by a fruitful interaction between fundamental theories, phenomenological models and effective field theories. All of them have their appropriate functions in the research process, and all of them are indispensable. They complement each other and hang together in a coherent way which I shall characterise in some detail. To illustrate all this I will present a case study from nuclear and particle physics. The resulting view about scientific theorising is inherently pluralistic, and has implications for the debates about reductionism and scientific explanation.
Scientific theories are hard to find, and once scientists have found a theory H, they often believe that there are not many distinct alternatives to H. But is this belief justified? What should scientists believe about the number of alternatives to H, and how should they change these beliefs in the light of new evidence? These are some of the questions that we will address in this paper. We also ask under which conditions failure to find an alternative to H confirms the theory in question. This kind of reasoning (which we call the No Alternatives Argument) is frequently used in science and therefore deserves a careful philosophical analysis.
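The probabilistic structure of such an argument can be sketched in a toy two-state Bayesian model (the two-state simplification and all numerical values below are illustrative assumptions, not the paper's actual model): failure to find an alternative (F) is more likely when few alternatives to H exist, and H is more likely to be true when few alternatives exist, so F confirms H via this common variable.

```python
# Toy two-state sketch of a No Alternatives Argument.
# N: whether the number of alternatives to H is "few" or "many".
# H: the theory is true.  F: scientists fail to find an alternative.
# H and F are assumed conditionally independent given N.
# All probabilities are illustrative, not taken from the paper.

p_states = {"few": 0.5, "many": 0.5}          # prior over number of alternatives
p_h_given = {"few": 0.8, "many": 0.2}         # H more likely if few alternatives
p_f_given = {"few": 0.7, "many": 0.3}         # failure more likely if few

prior_h = sum(p_states[n] * p_h_given[n] for n in p_states)
p_f = sum(p_states[n] * p_f_given[n] for n in p_states)
p_h_and_f = sum(p_states[n] * p_h_given[n] * p_f_given[n] for n in p_states)
posterior_h = p_h_and_f / p_f

print(prior_h, posterior_h)  # 0.5 vs 0.62: observing F raises P(H)
```

Because both H and F are made more probable by there being few alternatives, observing F raises the probability of H even though F is not ordinary empirical evidence for H.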
Fundamental theories are hard to come by. But even if we had them, they would be too complicated to apply. Quantum chromodynamics (QCD) is a case in point. This theory is supposed to govern all strong interactions, but it is extremely hard to apply and test at energies where protons, neutrons and pions are the effective degrees of freedom. Instead, scientists typically use highly idealized models such as the MIT Bag Model or the Nambu–Jona-Lasinio Model to account for phenomena in this domain, to explain them and to gain understanding. Based on these models, which typically isolate a single feature of QCD (confinement and chiral symmetry breaking, respectively) and disregard many others, scientists attempt to get a better understanding of the physics of strong interactions. But does this practice make sense? Is it justified to use these models for the purposes at hand? Interestingly, these models do not even describe the mass spectrum of protons, neutrons and pions and their lowest-lying excitations accurately, despite several adjustable parameters. And yet, the models are heavily used. I'll argue that a qualitative story, which establishes an explanatory link between the fundamental theory and a model, plays an important role in model acceptance in these cases.