According to Bayesians, agents should respond to evidence by conditionalizing their prior degrees of belief on what they learn. The main aim of this paper is to demonstrate that there are common scenarios in which Bayesian conditionalization is less rational, both from an ecological and from an internal perspective, than other theoretically well-motivated belief updating strategies, even in very simple situations and even for an "ideal" agent who is computationally unbounded. The examples also serve to demarcate the narrow conditions under which Bayesian conditionalization is guaranteed to be ecologically optimal. A second aim of the paper is to argue for a broader notion of rationality than is typically assumed in formal epistemology. On this broader understanding of rationality, classical decision-theoretic principles such as expected utility maximization play a less important role.