A rational person doesn't believe just anything. There are limits on what it is rational to believe. How wide are these limits? That's the main question that interests me here. But a secondary question immediately arises: What factors impose these limits? A first stab is to say that one's evidence determines what it is epistemically permissible for one to believe. Many will claim that there are further, non-evidentiary factors relevant to the epistemic rationality of belief. I will be ignoring the details of alternative answers in order to focus on the question of what kind of rational constraints one's evidence puts on belief. Our main question concerns how far epistemic permission and obligation can come apart.
I argue that its appearing to you that P does not provide justification for believing that P unless you have independent justification for the denial of skeptical alternatives, that is, hypotheses incompatible with P but such that, if they were true, it would still appear to you that P. Thus I challenge the popular view known as 'dogmatism,' according to which, for some contents P, you need only lack reason to suspect that skeptical alternatives are true in order for an experience as of P to justify belief that P. I pursue three lines of objection to dogmatism, having to do with probabilistic reasoning, considerations of future or hypothetically available justification, and epistemic circularity. I briefly sketch a fall-back position which avoids the problems raised.
We conducted a preregistered multilaboratory project (k = 36; N = 3,531) to assess the size and robustness of ego-depletion effects using a novel replication method, termed the paradigmatic replication approach. Each laboratory implemented one of two procedures intended to manipulate self-control and tested performance on a subsequent measure of self-control. Confirmatory tests found a nonsignificant result (d = 0.06). Confirmatory Bayesian meta-analyses using an informed-prior hypothesis (δ = 0.30, SD = 0.15) found that the data were 4 times more likely under the null than under the alternative hypothesis. Hence, preregistered analyses did not find evidence for a depletion effect. Exploratory analyses on the full sample (i.e., ignoring exclusion criteria) found a statistically significant effect (d = 0.08); Bayesian analyses showed that the data were about equally likely under the null and informed-prior hypotheses. Exploratory moderator tests suggested that the depletion effect was larger for participants who reported more fatigue but was not moderated by trait self-control, willpower beliefs, or action orientation.
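The Bayesian comparison described in the abstract can be sketched at the summary level: an informed prior on the effect size, δ ~ Normal(0.30, 0.15), is tested against the point null δ = 0 by comparing the marginal likelihood of the observed effect under each hypothesis. The effect estimate (d = 0.06) and prior come from the abstract; the standard error used below is an assumed illustrative value, not a figure from the paper, so the resulting Bayes factor only roughly tracks the reported one.

```python
from math import exp, pi, sqrt

def normal_pdf(x: float, mu: float, sd: float) -> float:
    """Density of a Normal(mu, sd) distribution at x."""
    return exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * sqrt(2 * pi))

d_obs = 0.06                        # confirmatory meta-analytic estimate (from the abstract)
se = 0.035                          # ASSUMED standard error, for illustration only
prior_mean, prior_sd = 0.30, 0.15   # informed prior on delta (from the abstract)

# Likelihood of the observed effect under the point null, delta = 0.
like_h0 = normal_pdf(d_obs, 0.0, se)

# Under H1 with a normal prior, the marginal likelihood is analytic:
# d_obs ~ Normal(prior_mean, se^2 + prior_sd^2).
like_h1 = normal_pdf(d_obs, prior_mean, sqrt(se**2 + prior_sd**2))

bf01 = like_h0 / like_h1            # Bayes factor favoring the null
print(f"BF01 = {bf01:.1f}")
```

With these inputs the Bayes factor favoring the null comes out near 3, the same order as the factor of 4 reported in the abstract; the actual confirmatory analysis was a full meta-analysis over the 36 laboratories, not this two-number summary.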
I treat you as a thermometer when I use your belief states as more or less reliable indicators of the facts. Should I treat myself in a parallel way? Should I think of the outputs of my faculties and yours as like the readings of two thermometers the way a third party would? I explore some of the difficulties in answering these questions. If I am to treat myself as well as others as thermometers in this way, it would appear that I cannot reasonably trust my own convictions over yours unless I have antecedent reason to suppose that I am more likely than you to get things right. I appeal to some probabilistic considerations to suggest that our predicament as thermometers might not actually be as bad as it seems.