Interaction enables users to navigate large amounts of data effectively, supports cognitive processing, and expands the ways data can be represented. However, despite popular belief, there have been few attempts to empirically demonstrate whether adding interaction to a static visualization improves its function. In this paper, we address this gap. We use a classic Bayesian reasoning task as a test bed for evaluating whether allowing users to interact with a static visualization can improve their reasoning. Through a crowdsourced study, we show that adding interaction to a static Bayesian reasoning visualization does not necessarily improve users’ accuracy on a Bayesian reasoning task, and in some cases can significantly detract from it. Moreover, we demonstrate that changes in performance are modulated by the design of the underlying visualization and by users’ spatial ability. Our work suggests that interaction is not as unambiguously beneficial as is often believed.