role in scientific inference might seem problematic. Scientific research contributes to what Kitcher calls "public knowledge", "that body of shared information on which people draw in pursuing their own ends" (Kitcher, 2011, p.85). Given that different people hold different values, a value-laden science may fail to contribute to "public" knowledge. I think this is a serious concern, which outweighs the considerations in favour of a value-laden science. Therefore, in §§2 and 3, I draw on an unusual combination of Kant and Richard Jeffrey to argue that scientific inference aimed at public communication should not take account of non-epistemic concerns, thereby blunting the arguments in §1. §4 discusses how these arguments relate to scientists' broader communicative obligations, including in neonicotinoid research, and to ongoing debates over inductive risk and proper scientific inference. In conclusion, I outline the broader implications of my arguments for understanding the "value free ideal" for science.

§1 Inductive risk and the Floating Standards Obligation

In 1953, Richard Rudner claimed that the scientist qua scientist "accepts or rejects hypotheses", but no hypothesis is ever completely verified by the available evidence; therefore, decisions about acceptance must turn on whether the evidence is "sufficiently strong" (Rudner, 1953, p.2). More recently, Heather Douglas has set out a similar problem: all agents, including scientists, face choices about whether to make empirical claims which are not deductively implied by the available evidence (Douglas, 2009, p.87). Both argue for a similar response to these problems. For Rudner, decisions about whether evidence is sufficiently strong are "a function of the importance, in the typically ethical sense, of making a mistake in accepting or rejecting the hypothesis" (p.2, emphasis in original).
Douglas argues that everyone, including scientists, has a moral responsibility to "consider the consequences of error" (p.87) when making claims. Therefore, science is not value-free, in that "scientists should consider the potential social and ethical consequences of error in their work, they should weigh the importance of those consequences, and they should set burdens of proof accordingly" (p.87).

Rudner's argument convinced many philosophers: for example, Hempel (1965) and Gaa (1977). More recently, following Douglas's work, the "argument from inductive risk" has become commonplace, assumed in work by Kitcher (2011, 141-155) and Kukla (2012, 853-855), with discussions of its theoretical implications (Steel, 2010) and its practical implications, for "trust" in science (Wilholt, 2012) and model construction (Biddle and Winsberg, 2012). Indeed, some now claim that her argument does not go far enough (Brown, forthcoming).

In this paper, I will follow Rudner and Douglas in assuming that scientists face problems of "inductive risk". I will, however, dispute their claims about how scientists must respond to these problems. To understand my proposals, it is first necessary to clarify...