the problem of whether more coherence implies a higher likelihood of truth. Let us say that coherence is truth conducive if it has that property. The question then is: If a system S is more coherent than another system S', are we then allowed to conclude that S is more likely than S' to be true as a whole? There are good reasons to pay attention to this particular question. First, as Peter Klein and Ted A. Warfield point out, it asks for a minimal sense in which coherence could imply truth. It would seem difficult to maintain that coherence implies truth without also maintaining that more coherence implies a higher likelihood of truth. Second, it is relatively clear and unambiguous. By contrast, the question "Is a coherent system highly likely to be true?", for instance, suffers from serious vagueness along two dimensions: concerning how high a degree of coherence it takes for a system to qualify as "coherent" as well as concerning how high a degree of likelihood it takes for a system to qualify as "highly" likely to be true. In the next two sections the task will be, first, to get clearer on what kind of property coherence is, and, second, to obtain a better understanding of how truth conduciveness should be construed, more precisely. It is implausible to think that coherence is truth conducive in the absence of further conditions: a well-composed novel is usually not true, and yet it may still be highly coherent, perhaps far more so than reality itself. This raises the question of what the additional prerequisites might be, a topic that will be dealt with in sections IV and V. In the final section, I will return to the presystematic question of whether coherence implies truth and consider a different rendering of it. One of the theses advanced in this paper will be that common criticisms against a connection between coherence and truth are ill-founded, resting on an inadequate and uncharitable understanding of truth conduciveness.
But I will also argue that even on a more adequate rendering of that notion, coherence is at best truth conducive in a very weak sense.

II. THE CONCEPT OF COHERENCE

C. I. Lewis defined coherence, or "congruence", to use his favored term, as follows (p. 338):
It is widely agreed that knowledge has greater value than mere true belief. This chapter begins by identifying a weak sense of ‘know’ (in which it means ‘believe truly’) under which knowledge cannot have greater value. There is a stronger sense of ‘know’ for which the value superiority thesis is plausible. The chapter offers two solutions to the swamping problem. The conditional probability solution states that reliabilist knowledge is more valuable than true belief because the former is a better indicator than the latter of future true belief. The second solution explains how a reliable process token can bring independent value into the picture. This can happen either because the value of a token process derives from the type it instantiates (type-instrumentalism) or because the value associated with a reliable process acquires independent, not merely derivative, value (value autonomization). The chapter's final section contrasts our approaches with those of virtue epistemology.
In a seminal book, Alvin I. Goldman outlines a theory for how to evaluate social practices with respect to their “veritistic value”, i.e., their tendency to promote the acquisition of true beliefs (and impede the acquisition of false beliefs) in society. In the same work, Goldman raises a number of serious worries for his account. Two of them concern the possibility of determining the veritistic value of a practice in a concrete case because (1) we often don't know what beliefs are actually true, and (2) even if we did, the task of determining the veritistic value would be computationally extremely difficult. Neither problem is specific to Goldman's theory and both can be expected to arise for just about any account of veritistic value. It is argued here that the first problem does not pose a serious threat to large classes of interesting practices. The bulk of the paper is devoted to the computational problem, which, it is submitted, can be addressed in promising terms by means of computer simulation. In an attempt to add vividness to this proposal, an up-and-running simulation environment (Laputa) is presented and put to some preliminary tests.
Much of what we believe we know, we know through the testimony of others (Coady, 1992). While there has been long-standing evidence that people are sensitive to the characteristics of the sources of testimony, for example in the context of persuasion, researchers have only recently begun to explore the wider implications of source reliability considerations for the nature of our beliefs. Likewise, much remains to be established concerning what factors influence source reliability. In this paper, we examine, both theoretically and empirically, the implications of using message content as a cue to source reliability. We present a set of experiments examining the relationship between source information and message content in people's responses to simple communications. The results show that people spontaneously revise their beliefs in the reliability of the source on the basis of the expectedness of a source's claim and, conversely, adjust message impact by perceived reliability; hence source reliability and message content have a bi-directional relationship. The implications are discussed for a variety of psychological, philosophical and political issues such as belief polarization and dual-route models of persuasion.
The paper describes a simulation environment for epistemic interaction based on a Bayesian model called Laputa. An interpretation of the model is proposed under which the exchanges taking place between inquirers are argumentative. The model, under this interpretation, is seen to survive the polarization test: if initially disposed to judge along the same lines, inquirers in Laputa will adopt a more extreme position in the same direction as an effect of group deliberation, just like members of real argumentative bodies. Our model allows us to study what happens to mutual trust in the polarization process. We observe that inquirers become increasingly trusting, which creates a snowball effect. We also study conditions under which inquirers will diverge and adopt contrary positions. To the extent that Bayesian reasoning is normatively correct, the bottom line is that polarization and divergence are not necessarily the result of mere irrational "group think": even ideally rational inquirers will predictably polarize or diverge under realistic conditions. The concluding section comments on the relation between the present model and the influential and empirically robust Persuasive Argument Theory (PAT), and it is argued that the former is essentially subsumable under the latter.
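The bi-directional dependence of source trust and message impact discussed in the two abstracts above can be illustrated with a minimal Bayesian sketch. This is a generic toy model, not the actual update rules of Laputa or the experimental materials; the two-state reliability assumption (a reliable source always reports the truth, an unreliable one reports at chance) is an illustrative simplification of mine:

```python
def update(h, rho):
    """Jointly update belief in a hypothesis H and in the source's
    reliability R after the source reports 'H'.

    Toy model: a reliable source always reports the truth; an
    unreliable one reports 'H' or 'not-H' with probability 0.5 each.

    h   -- prior probability of H
    rho -- prior probability that the source is reliable
    Returns (posterior P(H), posterior P(R)).
    """
    # Total probability that the source reports 'H'
    p_report = h * (rho + 0.5 * (1 - rho)) + (1 - h) * 0.5 * (1 - rho)
    # Bayes' theorem for the hypothesis and for reliability
    post_h = h * (rho + 0.5 * (1 - rho)) / p_report
    post_r = h * rho / p_report
    return post_h, post_r

# An expected claim (prior h = 0.9) raises trust in the source,
# while a surprising claim (h = 0.1) lowers it, mirroring the
# bi-directional relationship described above.
_, trust_after_expected = update(0.9, 0.6)
_, trust_after_surprise = update(0.1, 0.6)
assert trust_after_expected > 0.6 > trust_after_surprise
```

Note that the same formula also captures the converse direction: the higher the prior trust `rho`, the more the report shifts the posterior on H, so message impact is modulated by perceived reliability, just as trust is modulated by message expectedness.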