We present an efficient, principled, and interpretable technique for inferring module assignments and for identifying the optimal number of modules in a given network. We show how several existing methods for finding modules can be described as variant, special, or limiting cases of our work, and how the method overcomes the resolution limit problem, accurately recovering the true number of modules. Our approach is based on Bayesian methods for model selection, which have been used with success for almost a century, implemented here using a variational technique developed only in the past decade. We apply the technique to synthetic and real networks and outline how the method naturally allows selection among competing models.

Large-scale networks describing complex interactions among a multitude of objects have found application in a wide array of fields, from biology to social science to information technology [1,2]. In these applications one often wishes to model networks, suppressing the complexity of the full description while retaining relevant information about the structure of the interactions [3]. One such network model groups nodes into modules, or "communities," with different densities of intra- and inter-connectivity for nodes in the same or different modules. We present here a computationally efficient Bayesian framework for inferring the number of modules, model parameters, and module assignments for such a model.

The problem of finding modules in networks (or "community detection") has received much attention in the physics literature, wherein many approaches [4,5] focus on optimizing an energy-based cost function with fixed parameters over possible assignments of nodes into modules. The particular cost functions vary, but most compare a given node partitioning to an implicit null model, the two most popular being the configuration model and a limited version of the stochastic block model (SBM) [6,7].
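The stochastic block model mentioned above can be viewed as a generative process. The following minimal sketch (function and parameter names are our own illustration, not taken from the paper) connects nodes in the same module with probability `p_in` and nodes in different modules with probability `p_out`:

```python
import random
from itertools import combinations

def sample_sbm(assignments, p_in, p_out, seed=0):
    """Sample an undirected graph from a two-parameter stochastic block model.

    assignments: list mapping each node index to its module label.
    Each pair of nodes is linked independently with probability p_in
    if they share a module and p_out otherwise.
    """
    rng = random.Random(seed)
    edges = set()
    for i, j in combinations(range(len(assignments)), 2):
        p = p_in if assignments[i] == assignments[j] else p_out
        if rng.random() < p:
            edges.add((i, j))
    return edges
```

For example, with two modules of ten nodes each and `p_in=1.0, p_out=0.0`, the sampler returns exactly the two complete subgraphs, 90 edges in total.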
While much effort has gone into how to optimize these cost functions, less attention has been paid to what is to be optimized. Recent studies emphasizing the latter question have shown that existing approaches suffer inherent problems regardless of how the optimization is performed: the choice of parameters sets a lower limit on the size of detectable modules, the so-called "resolution limit" problem [8,9]. We extend recent probabilistic treatments of modular networks [10,11] to develop a solution to this problem that relies on inferring distributions over the model parameters, as opposed to asserting parameter values a priori, to determine the modular structure of a given network. The resulting techniques are principled, interpretable, and computationally efficient, and can be shown to generalize several previous studies on module detection.
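The contrast drawn above, inferring distributions over parameters rather than asserting values a priori, can be illustrated with a toy Bayesian model-selection computation. The sketch below is our own illustrative code, not the paper's variational implementation: it integrates out each block-pair connection probability under a uniform Beta(1,1) prior, yielding a marginal likelihood (evidence) that can compare partitions with different numbers of modules without any hand-set resolution parameter:

```python
from math import lgamma
from itertools import combinations

def log_beta(a, b):
    """Log of the Beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_evidence(edges, assignments):
    """Log marginal likelihood of a partition under a Bernoulli SBM.

    Each block pair (r, s) has an unknown connection probability with a
    uniform Beta(1, 1) prior; integrating it out gives a closed-form
    Beta-Bernoulli evidence per block pair, summed in log space.
    """
    n = len(assignments)
    edge_set = {tuple(sorted(e)) for e in edges}
    # Tally present edges e_rs and possible pairs m_rs for each block pair.
    counts = {}
    for i, j in combinations(range(n), 2):
        key = tuple(sorted((assignments[i], assignments[j])))
        e, m = counts.get(key, (0, 0))
        counts[key] = (e + ((i, j) in edge_set), m + 1)
    return sum(log_beta(1 + e, 1 + m - e) - log_beta(1, 1)
               for e, m in counts.values())
```

On a graph consisting of two disconnected five-node cliques, the evidence of the planted two-module partition exceeds that of the single-module partition, so the correct number of modules is selected automatically.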