The results of forensic science are believed to be reliable and are widely used in support of verdicts around the world. However, owing to the lack of suitable empirical studies, we actually know very little about how reliable such results are. In this paper, I argue that phenomena analogous to the main culprits behind the replication crisis in psychology (questionable research practices, publication bias, and funding bias) are also present in forensic science, and that forensic results are therefore significantly less reliable than is commonly believed. I conclude that, in order to obtain reliable estimates of the reliability of forensic results, we need to conduct studies analogous to the large-scale replication projects in psychology. Additionally, I point to some ways of improving the reliability of forensic science, inspired by the reforms proposed in response to the replication crisis.
Replicability is widely regarded as one of the defining features of science, and its pursuit is one of the main postulates of meta-research, a discipline that has emerged in response to the replicability crisis. At the same time, replicability is typically treated with caution by philosophers of science. In this paper, we reassess the value of replicability from an epistemic perspective. We defend the orthodox view, according to which replications are always epistemically useful, against the more prudent view on which they are useful only in very limited circumstances. Additionally, we argue that we can learn more about the original experiment and about the limits of the discovered effect from replications at different levels. We hold that replicability is a crucial feature of experimental results and that scientists should continue to strive to secure it.
Contextualist accounts of aesthetic predicates have difficulty explaining why we feel that speakers are disagreeing when they make true and compatible but superficially contradictory aesthetic judgments. One possible way to account for the disagreement is hybrid expressivism, which holds that the disagreement occurs at the level of pragmatically conveyed, clashing contents about the speakers’ conative states. Marques (2016) defends such a strategy, combining dispositionalism about value, contextualism, and hybrid expressivism. This paper critically evaluates the plausibility of the suggested pragmatic mechanisms for conveying the kind of contents Marques takes to explain disagreements. The positive part of the paper suggests an alternative account of how aesthetic judgments serve as sources of information about speakers’ conative aesthetic states.
The Ramsey Test is considered the default test for the acceptability of indicative conditionals. I will argue that it is incompatible with some recent developments in the conceptualization of conditionals, namely the growing empirical evidence for the Relevance Hypothesis. According to this hypothesis, a necessary condition for the acceptability of an indicative conditional is that its antecedent be positively probabilistically relevant to its consequent. The idea originates in the Evidential Support Theory presented in Douven (2008). I will defend the hypothesis against alleged counterexamples and show that it is supported by the growing empirical evidence. Finally, I will present a version of the Ramsey Test which incorporates the relevance condition and is therefore consistent with that evidence.
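In standard probabilistic notation (a common way of stating the condition, not quoted from the abstract), positive probabilistic relevance of the antecedent A for the consequent C amounts to:

P(C | A) > P(C), or equivalently, whenever 0 < P(A) < 1, P(C | A) > P(C | ¬A).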
Indicative conditionals and tendency causal claims are closely related to each other (e.g., Frosch and Byrne 2012), but despite these connections, they are usually studied separately. A unifying framework could consist in their shared dependence on probabilistic factors such as statistical relevance, but theoretical research along these lines (e.g., Eells 1991; Douven 2008, 2016) needs to be strengthened by more empirical results. This paper closes that gap and presents empirical results on how judgments about tendency causal claims and indicative conditionals are driven by probabilistic factors, and on how these factors (in particular statistical relevance) differ in their predictive power for causal and conditional claims.
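One standard measure of the statistical relevance of an antecedent or candidate cause A for an outcome C in this literature (an illustrative choice, not specified in the abstract) is the contingency:

ΔP = P(C | A) − P(C | ¬A),

which is positive when A raises the probability of C, negative when A lowers it, and zero when A is statistically irrelevant to C.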
In the last decade, many problematic cases of scientific conduct have been diagnosed: some involve outright fraud (e.g., Stapel, 2012), while others are more subtle (e.g., the supposed evidence of extrasensory perception; Bem, 2011). These and similar problems can be interpreted as caused by a lack of scientific objectivity. The current philosophical theories of objectivity do not provide scientists with conceptualizations that can be effectively put into practice to remedy these issues. We propose a novel way of thinking about objectivity for individual scientists: a negative and dynamic approach. We provide a philosophical conceptualization of objectivity that is informed by empirical research. In particular, we intend to take the first steps toward an empirically and methodologically informed inventory of the factors that impair scientific practice. The inventory will be compiled into a negative conceptualization (i.e., of what is not objective), which could in principle be used by individual scientists to assess (deviations from) the objectivity of scientific practice. We propose a preliminary outline of a usable and testable instrument for indicating the objectivity of scientific practice.
The chapter is devoted to the probability and acceptability of indicative conditionals. Focusing on three influential theses, the Equation, Adams' thesis, and the qualitative version of Adams' thesis, Sikorski argues that none of them is well supported by the available empirical evidence. In the most controversial case, that of the Equation, the results of the many studies that support it are, at least to some degree, undermined by recent experimental findings. Sikorski discusses the Ramsey Test and Lewis's triviality proof, with special attention devoted to popular ways of blocking the latter. Sikorski concludes that the role of the three theses in future studies of conditionals should be rethought, and he presents alternative proposals.
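For reference, the three theses are standardly formulated roughly as follows (common statements from the literature, not quoted from the chapter), with A the antecedent and C the consequent of an indicative conditional:

The Equation: P(if A, then C) = P(C | A).
Adams' thesis: the degree of acceptability of "if A, then C" equals P(C | A).
Qualitative Adams' thesis: "if A, then C" is acceptable just in case P(C | A) is sufficiently high.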
The Minimal Theory of Causation, presented in Graßhoff and May (2001), aspires to be a version of the regularity analysis of causation that correctly predicts our causal intuitions. In my article, I will argue that it is unsuccessful in this respect. The second aim of the paper will be to defend Hitchcock’s proposal concerning divisions of causal relations (presented in Hitchcock, 2001) against the criticism made in Jakob (2006) on the basis of the Minimal Theory of Causation.