In this chapter we discuss the idea of complexity. While this concept is widely used, its meaning and interpretation usually remain implicit. We show that a mereological view, in which complexity is seen as the composition of multiple unchanged parts, motivates an investigation that starts from the separation of causal factors and their study in isolation. In contrast, we propose what we call 'genuine complexity', in which the parts of a whole not only compose and interact, but also change each other through such interaction. This, however, requires that an investigation start from the higher level of complexity: by observing the whole. At that level it becomes possible to focus on the interactions between context, lived experience and physical body parts. Several clinicians, globally, are pushing for a change in this direction. An ecological shift in medicine, we argue, will be not only necessary but also unavoidable, if we acknowledge that human biology is genuinely complex and truly reflect on the meaning and implications of this.
Since the introduction of evidence-based medicine, there have been discussions about the epistemic primacy of randomised controlled trials (RCTs) for establishing causality in medicine and public health. A growing movement within philosophy of science calls instead for evidential pluralism: the view that more than one method is needed to investigate health outcomes. What should such evidential pluralism look like in practice? How useful are the various methods available for causal inquiry? Further, how should different types of causal evidence be evaluated? This paper proposes a constructive answer and introduces a framework aimed at supporting scientists in developing appropriate methodological approaches for exploring causality. We start from the philosophical tradition that highlights intrinsic properties (dispositions, causal powers or capacities) as essential features of causality. This abstract idea has wide methodological implications. The paper explains how different methods, such as lab experiments, case studies, N-of-1 trials, case-control studies, cohort studies, RCTs and patient narratives, all have some strengths and some limitations for picking out intrinsic causal properties. We explain why considering philosophy of causality is crucial for evaluating causality in the health sciences. In our proposal, we combine the various methods in a temporal process, which could then take us from an observed phenomenon (e.g., a correlation) to a causal hypothesis and, finally, to improved theoretical knowledge.
Scientists seek to eliminate all forms of bias from their research. However, all scientists also make assumptions of a non-empirical nature about topics such as causality, determinism and reductionism when conducting research. Here, we argue that since these 'philosophical biases' cannot be avoided, they need to be debated critically by scientists and philosophers of science.
In "The evidence that evidence-based medicine omits", Brendan Clarke and colleagues argue that when establishing causal facts in medicine, evidence of mechanisms ought to be included alongside evidence of correlations. One of the reasons they provide is that correlations can be spurious, generated by unknown confounding variables. A causal mechanism can provide a plausible explanation for a correlation, and the absence of such an explanation is an indication that the correlation is not causal. Evidence-based medicine (EBM) proponents remain sceptical about this argument, one problem being that the formulation of a mechanism requires judgements that are external to the evaluation of data and experimental designs, for instance judgements of plausibility against, or derivability from, background knowledge. Because background knowledge is always incomplete and therefore unreliable, EBM proponents maintain that the plausibility of a hypothesis should be evaluated mainly by the quality of the population data that yielded it. Here, I use the example of oestrogen replacement therapy's effect on coronary heart disease, an example often quoted in defence of the epistemic advantage of randomised controlled trials, to show that the evaluation of the most reliable study design necessarily implies the adoption of judgements that are external to the specific evidence of correlation. The exclusion of evidence of mechanism, therefore, is not effective in bypassing paradigm-dependent judgements, which are external to specific evidence. Because such judgements cannot be excluded from evidence evaluation, they can only be kept under scrutiny, or adopted uncritically. I propose that the latter option can hinder the maintenance of an active critical inquiry, as well as the analysis of experts' disagreement.