This work considers a class of biologically plausible cost functions for neural networks, where the same cost function is minimised by both neural activity and plasticity. We show that such cost functions can be cast as a variational bound on model evidence under an implicit generative model. Using generative models based on Markov decision processes (MDPs), we show analytically that neural activity and plasticity perform Bayesian inference and learning, respectively, by maximising model evidence. Using mathematical and numerical analyses, we then confirm that biologically plausible cost functions used in neural networks correspond to variational free energy under some prior beliefs about the prevalence of latent states that generate inputs. These prior beliefs are determined by particular constants (i.e., thresholds) that define the cost function. This means that Bayes-optimal encoding of latent or hidden states is achieved when, and only when, the network's implicit priors match the process that generates its inputs. Our results suggest that when a neural network minimises its cost function, it implicitly minimises variational free energy under optimal or sub-optimal prior beliefs. This insight is potentially important because it suggests that any free parameter of a neural network's cost function can itself be optimised by minimising variational free energy.

Keywords: free-energy principle, variational Bayesian inference, learning algorithm, synaptic plasticity, Markov decision process, blind source separation
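The central relationship in the abstract — that variational free energy is a bound on (negative log) model evidence, tight exactly when the approximate posterior matches the true Bayesian posterior — can be illustrated with a minimal sketch. This is a toy discrete model of our own construction (a single observation, two latent states, with an assumed likelihood matrix `A` and prior `D`), not the paper's simulations:

```python
import numpy as np

# Toy generative model: p(o, s) = p(o | s) p(s).
# A is the (assumed) likelihood; columns index latent states s.
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])
D = np.array([0.5, 0.5])  # prior beliefs p(s)

def free_energy(q, o):
    """Variational free energy F = E_q[ln q(s) - ln p(o, s)].

    F upper-bounds -ln p(o); the bound is tight when q is the
    exact posterior p(s | o).
    """
    joint = A[o] * D  # p(o, s) for the observed outcome o
    return float(np.sum(q * (np.log(q) - np.log(joint))))

o = 0                                       # observed outcome
evidence = np.sum(A[o] * D)                 # p(o)
posterior = A[o] * D / evidence             # exact Bayesian posterior

# At the exact posterior, F equals negative log evidence.
f_opt = free_energy(posterior, o)
# Any other q (here, a flat one) incurs a strictly larger F.
f_flat = free_energy(np.array([0.5, 0.5]), o)
```

Here minimising `free_energy` over `q` recovers exact inference; in the paper's setting, neural activity plays the role of `q` and the constants defining the cost function determine the implicit prior `D`.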